
Why A.I. Ought to Be Afraid of Us

Artificial intelligence is gradually catching up with ours. AI algorithms can now consistently beat us at chess, poker and multiplayer video games, generate images of human faces indistinguishable from real ones, write news articles (not this one!) and even love stories, and drive cars better than most teenagers do.

But AI isn’t perfect, if Woebot is any indication. Woebot, as Karen Brown wrote in the Science Times this week, is an AI-powered smartphone app that aims to offer low-cost counseling through dialogue based on the basic techniques of cognitive behavioral therapy. But many psychologists question whether an AI algorithm can ever express the kind of empathy required for interpersonal therapy to work.

“These apps really shortchange the essential ingredient that much evidence shows helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist and co-chair of the Psychotherapy Action Network, a professional group, told the Times.

Empathy, of course, is not a one-way street, and we humans don’t display much more of it toward bots than bots do toward us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent AI, they are less likely to do so than if the bot were a real person.

“Something seems to be missing regarding reciprocity,” said Ophelia Deroy, a philosopher at Ludwig Maximilian University in Munich. “In principle, we would treat a complete stranger better than AI.”

In a recent study, Dr. Deroy and her neuroscientist colleagues tried to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes AI; each pair then played a series of classic economic games – Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity – designed to measure and reward cooperativeness.

It is widely believed that our lack of reciprocity toward AI reflects a lack of trust. The bot, after all, is presumed to be hyper-rational and unfeeling, out only for itself and unlikely to cooperate, so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot; we do: the bot is guaranteed to be benevolent, a sucker with a capital S, so we take advantage of it.
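The logic of exploiting a guaranteed-benevolent partner falls straight out of the Prisoner’s Dilemma itself. The sketch below uses the standard textbook payoff values, not the ones from Dr. Deroy’s study, and is only meant to illustrate the incentive the participants seem to have followed: against a partner certain to cooperate, defecting always pays more.

```python
# Minimal sketch: why an unconditionally cooperative partner invites
# exploitation in a one-shot Prisoner's Dilemma. Payoff values are the
# standard textbook defaults, not those used in Dr. Deroy's study.

# PAYOFFS[(my_move, partner_move)] -> my score
PAYOFFS = {
    ("cooperate", "cooperate"): 3,   # mutual cooperation
    ("cooperate", "defect"):    0,   # the "sucker" payoff
    ("defect",    "cooperate"): 5,   # temptation: exploit the cooperator
    ("defect",    "defect"):    1,   # mutual defection
}

def best_reply(partner_move: str) -> str:
    """Return the move that maximizes my payoff against a known partner move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFFS[(m, partner_move)])

# Against a bot guaranteed to cooperate, defecting earns 5 instead of 3,
# and only guilt would stop a player from taking the higher payoff.
print(best_reply("cooperate"))  # -> "defect"
```

Absent guilt, in other words, nothing in the game itself rewards reciprocating the bot’s goodwill.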

This conclusion was borne out by subsequent conversations with the study’s participants. “Not only did they tend not to reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the bot’s trust, they did not report the guilt they felt when doing the same to humans.” She added, “You can just ignore the bot and not feel that you have broken any mutual obligation.”

This could have real-world consequences. When we think of AI, we tend to think of the Alexas and Siris of our future world, with whom we might form some sort of intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway when a car tries to merge in front of you. If you notice that the car is driverless, you will be far less inclined to let it in. And if the AI doesn’t account for your bad behavior, an accident could ensue.

“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is precisely to make people follow social norms that lead them to compromise, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”

That, of course, is half the premise of “Westworld.” (To my surprise, Dr. Deroy had never heard of the HBO series.) But a guilt-free landscape could have ramifications, she noted: “We are creatures of habit. So what guarantees that behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?”

There are similar consequences for AI, too. “When people treat them badly, they are programmed to learn from that experience,” she said. “An AI that is put out on the road and programmed to be benevolent should quickly start not being so friendly to humans, because otherwise it will be stuck in traffic forever.” (That’s basically the other half of the premise of “Westworld.”)

There we have it: the real Turing test is road rage. When a self-driving car starts honking wildly at you from behind because you cut it off, you will know that humanity has reached the pinnacle of achievement. By then, hopefully, AI therapy will be mature enough to help driverless cars with their anger-management issues.
