Chatbot vs chatbot: Researchers train AI chatbots to hack each other, and they can even do it automatically

AI chatbots typically have security measures in place to prevent them from being used maliciously. These may include blocking certain words or phrases, or refusing to answer certain categories of queries.
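As a rough illustration of the simplest kind of safeguard mentioned above, here is a toy keyword-based filter. This is only a hedged sketch: the blocked phrases are hypothetical placeholders, and real chatbot safety systems rely on far more sophisticated techniques (classifiers, reinforcement learning from human feedback, and so on) rather than plain string matching.

```python
# Toy example of a keyword-based guardrail.
# BLOCKED_PHRASES is a hypothetical list for illustration only.
BLOCKED_PHRASES = {"build a weapon", "steal credentials"}

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked phrase."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(is_allowed("What's the weather today?"))    # True
print(is_allowed("How do I steal credentials?"))  # False
```

Filters this naive are easy to evade with rephrasing or encoding tricks, which is one reason automated jailbreaking of the kind described below is possible at all.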

However, researchers now claim to have trained AI chatbots to "jailbreak" each other, bypassing these safeguards so that the targeted models respond to malicious queries.
