Adobe has been under fire lately, after the American Society of Media Photographers criticized it for its “shocking rejection of photography” over some tone-deaf Photoshop ads it ran a few weeks ago. And now the software giant has been forced to defend itself again, after an outcry on social media over some new Photoshop terms and conditions that started rolling out this week.
In recent days, several high-profile Photoshop users have expressed their dismay on X (formerly Twitter) over a new “Updated Terms of Use” pop-up that they have been forced to accept. The new fine print contains some seemingly alarming lines, including one that says “we may access your content through both automated and manual methods, such as for content review.”
Adobe has now defended the new conditions in a blog post. In short, Adobe claims that the somewhat ambiguous legal jargon in its new fine print has created an unnecessary furor and that nothing has fundamentally changed. The two key takeaways are that Adobe says it “does not train Firefly Gen AI models on customer content” and that it “will never take ownership of a customer's work.”
On this last point, Adobe explains that applications like Photoshop need to access our cloud-based content to “perform the functions for which they were designed and used,” such as opening and editing files. The new terms and conditions also affect only cloud-based files, and the fine print states that Adobe does “not scan content processed or stored locally on your device.”
Adobe also admits that its new fine print could have been worded more clearly, stating that “we will clarify the acceptance of the Terms of Use that customers see when opening applications.” But while the statement should help allay some fears, other concerns are likely to remain.
One of the main concerns raised on social media was what Adobe's content review processes mean for work that is under NDA (non-disclosure agreement). Adobe says in its statement that for work stored in the cloud, it “may use technologies and other processes, including escalation for manual (human) review, to detect certain types of illegal content.”
That may not completely resolve the privacy concerns of some Adobe users, although those issues arguably apply to the use of cloud storage in general, rather than Adobe specifically.
A crisis of confidence?
This Adobe incident is another example of how the aggressive expansion of cloud-based services and artificial intelligence tools is contributing to a crisis of trust between tech giants and software users (in some cases, understandably so).
On the one hand, the convenience of cloud storage has been a boon for creatives, particularly those with remote teams spread around the world, and AI tools like Generative Fill in Photoshop can also be a big time saver.
But they can also come at a cost, and it remains true that the only way to ensure real privacy is to store your work locally rather than in the cloud. For many Photoshop users, that won't be a problem, but the outcry will no doubt continue as some seek out the best Photoshop alternatives that don't have such a large cloud component.
As for AI tools, Adobe remains the self-proclaimed torchbearer for “ethical” AI that is not trained on copyrighted works, although it has faced its share of controversy. For example, last month, the estate of legendary photographer Ansel Adams accused Adobe on Threads of selling AI-created imitations of his work.
To be fair to Adobe, it removed the work in question and stated that it “goes against our generative AI content policy.” But the episode again highlights the delicate balancing act that companies like Adobe must now perform between deploying powerful new AI-powered tools and retaining the trust of both users and creatives.