OpenAI's director of model and behavior policy, Joanne Jang, has written a blog post on X about human-AI relationships, offering some well-considered thoughts on the subject and on how OpenAI is addressing the issues surrounding it. Essentially, as AI models get better at imitating life and conversation, people have begun treating AI chatbots as though they are people. It makes sense that OpenAI wants to make clear that it's aware of this and is factoring it into its plans.
But the thoughtful, nuanced approach, including designing models that feel helpful and friendly but not sentient, misses something crucial. No matter how clear and careful Jang tries to be, people forming emotional connections with AI is not an occasional edge case or a hypothetical future; it's happening now, and it seems to be happening a lot.
OpenAI may have been caught off guard, since CEO Sam Altman has commented on being surprised by how much people anthropomorphize AI and how deeply users claim to connect with the models. He has even acknowledged the emotional pull and its potential risks. That's why Jang's post exists.
She makes clear that OpenAI is building models to serve people and that it is prioritizing the emotional side of that equation. The company is researching how and why people form emotional attachments to AI and what that means for shaping future models. She makes a point of distinguishing between ontological consciousness, as in the actual consciousness humans have, and perceived consciousness, whether the AI seems conscious to users. Perceived consciousness is what matters for now, since that's what affects the people interacting with the AI. The company is trying to thread a behavioral needle that makes the AI seem warm and helpful without pretending it has feelings or a soul.
But the clinically compassionate language couldn't disguise an obvious missing element. It felt like watching someone put up a "Caution: Wet Floor" sign and tout plans for waterproof buildings a week after a flood left the floor knee-deep in water.
The blog post's elegant framing and cautious optimism, with its focus on responsible model building grounded in long-term research and cultural conditioning, glosses over the messy reality of how people are developing deep connections with AI chatbots, including ChatGPT. Many people don't just talk to ChatGPT as if it were software, but as if it were a person. Some even claim to have fallen in love with an AI companion, or to use one to replace human connection entirely.
AI intimacy
There are Reddit threads, Medium essays, and viral videos of people whispering sweet nothings to their favorite chatbot. It can be funny or sad or even enraging, but what it isn't is theoretical. Lawsuits over whether AI chatbots contributed to suicides are ongoing, and more than one person has reported relying on AI to the point that forming real relationships has become harder.
OpenAI points out that a model's constant, judgment-free attention can feel like companionship. And it admits that shaping a chatbot's tone and personality can affect how emotionally alive it feels, with the stakes rising for users absorbed in these relationships. But the tone of the piece is too detached and academic to acknowledge the potential scale of the problem.
Because with the AI-intimacy toothpaste already out of the tube, this is a matter of real-world behavior and of how the companies behind the AI shaping that behavior respond right now, not just in the future. Ideally, they would already have systems in place for detecting dependency. If someone spends hours a day with ChatGPT, talking to it as if it were their partner, the system should gently flag that behavior and suggest a break.
And romantic connections need some hard limits. Not an outright ban, which would be silly and probably counterproductive, but strict rules requiring any AI engaged in romantic role-play to remind people that they're talking to a bot, one that isn't actually alive or conscious. Humans are masters of projection, and a model doesn't have to flirt for a user to fall for it, of course, but any hint of conversation heading in that direction should trigger those protocols, and they should be far stricter when children are involved.
The same goes for AI models as a whole. Occasional reminders from ChatGPT along the lines of "Hey, I'm not a real person" might feel awkward, but they're arguably necessary in some cases and a good prophylactic in general. It's not users' fault that they anthropomorphize. Putting googly eyes on Roombas and giving our cars names and personalities is considered no more than mildly quirky. It's no surprise that a tool as responsive and verbal as ChatGPT can start to feel like a friend, a therapist, or even a partner. The point is that companies like OpenAI have a responsibility to anticipate this and design for it, and they should have from the start.
You could argue that adding all these guardrails ruins the fun, that people should be allowed to do as they please, and that artificial companionship can be a balm for loneliness. And that's true in moderate doses. But playgrounds have fences and roller coasters have safety belts for a reason. AI capable of imitating and provoking emotions without safety checks in place is simply negligent.
I'm glad OpenAI is thinking about this; I just wish the company had done so sooner, or had more urgency about it now. AI product design needs to reflect the reality that people are already in relationships with AI, and those relationships need more than thoughtful essays to stay healthy.