OpenAI's ChatGPT has, at various points, been called 'dumb', a willing accomplice to cybercrime, an Icarus analogy for the age, and a threat to confidential company data.
However, it seems we need to re-examine all of this, as reports are emerging that the artificial intelligence tool is once again leaking passwords, this time, just for a change, inside corporate support tickets.
Per Ars Technica, ChatGPT recently served up to one user chat logs from a pharmaceutical company's support system, including another user's error report about a user portal… that contained that user's credentials.
Clean-up in aisle GPT
“I went to make a query […] and when I logged back in moments later, I noticed the additional conversations. They weren't there when I used ChatGPT last night. […] No queries were made, they just appeared in my history and they certainly aren't mine (and I don't think they're from the same user either),” Ars Technica reader Chase Whiteside told the publication.
Whiteside was also able to obtain “the name of a presentation someone was working on,” “details of an unpublished research proposal,” and, one for TechRadar Pro's true anoraks, a script “written” in PHP. Which is to say, probably lifted from a public GitHub repository.
What's interesting/grim (delete as appropriate to your worldview) is that, even though ChatGPT has nothing close to a spotless track record, Whiteside says they're “a pretty heavy user” of the service, and shows no sign that this incident, or any of the ChatGPT-shaped incidents we've reported on over the past year or so, has given them pause. Ladies, gentlemen and everyone in between: dependency in action.
Analysis: I want to throw the intangible concept of AI into a vat of acid, can you help me?
Look, here at TechRadar Pro we're pretty niche, we know. You read us, so you already know that 'artificial intelligence' is not a sentient computer; it's a billionaire force-feeding copyrighted digital works into a CPU to create a corpus that gets spat back out at the masses like orange pips after the Heimlich maneuver.
Put monkeys in a room full of computers and they'll eventually just Ctrl-C + Ctrl-V the complete works of Shakespeare. Or rather, half an act's worth of Romeo and Juliet before telling you that longer, faster responses cost $20 a month.
Computers have no real sense of context; you're not actually talking to a person. You know all this and more, you're great, but nobody else knows, or nobody else cares about the damage 'AI' is causing, or maybe a little of both.
It seems like every week the average age of the person telling me 'I don't understand what AI is, but I use it daily, so I want to keep it' goes up, while my success rate in communicating what it is, the speculative cynicism behind it, and the damage it causes in real terms goes down. I reliably come up against 'I don't know, it makes my life easier,' or some variation thereof.
“Just a cheeky little leak of user data, mate,” is increasingly the stance of your mum, who posts a Facebook status declaring she does NOT give Mark Zuckerberg permission to use her private posts or personal data, then follows it up with a blurry close-up photograph of a new mole on her thigh that's worrying her. That's probably why we have the governments we have.
I have the sneaking feeling I've written about blind technological submission before, so what am I getting at? In that time-honored, apple-a-day spirit of preventative medicine, tell them the truth: that AI chatbots can threaten, swear and be racist (a not-safe-for-work but absolutely accurate headline from The Verge, there).
Don't tell them that's just a machine learning from their own abysmal behavior and reflecting it back at them, even though it is. Literally, in Whiteside's case, whose leaked logs caught our anonymous bug reporter exclaiming “this is crazy” at a living, breathing customer service agent.
Tell them to think of the children. It'll be banned within a fortnight.