Leave this sentient AI alone, fix racist chatbots first

Something for the weekend A robot is performing an interpretive dance on my doorstep.

WILL YOU TAKE IN THIS PACKAGE FOR YOUR NEIGHBOUR? he asks, hopping from one foot to the other.

“Of course,” I say. “Er… are you OK?”

I AM EXPRESSING AN EMOTION, declares the delivery robot, handing over the package without further explanation.

What emotion could it be? One foot, then the other, then the other two (he has four). Back and forth.

“Do you need to go to the bathroom?”

I AM EXPRESSING REGRET AT ASKING YOU TO TAKE IN A PACKAGE FOR YOUR NEIGHBOUR.

“It’s ‘regret’, isn’t it? Well, it’s not necessary. I don’t mind at all.”

He continues his dance in front of me.

“Up the stairs and first right.”

THANK YOU, I WAS EXPECTING WORSE, he replies, edging past me cautiously and hurrying upstairs to relieve himself. It’s a tough life doing deliveries, whether you’re a hume or a bot.

Earlier this year, researchers from the University of Tsukuba built a handheld text-messaging device, put a little robot face on top, and included a movable weight inside. By shifting the internal weight, the messenger bot would attempt to convey subtle emotions while reading messages out loud.

In particular, tests revealed that frustrating messages such as “Sorry, I’ll be late” were received with more grace and patience when the little weight-shifting mechanism inside the device was activated. The theory is that this helped users appreciate the apologetic tone of the message, and so calmed their reaction to it.

Dismiss such research as a gimmick if you like, but it’s not far removed from adding smileys and emojis to messages. Everyone knows you can take the anger out of “WTF!?” by adding a 🙂 right after it.

The challenge, then, is to determine whether the general public agrees on which emotions each permutation of weight-shifting in a handheld device is meant to convey. Does leaning to the left mean cheerfulness? Or uncertainty? Or that your uncle has an airship?

Ten years ago the UK had a kind but somber prime minister who thought “LOL” was an acronym for “lots of love”. He would type it at the end of his private messages to staff, colleagues and third parties in the hope that it would make him come across as warm and friendly. Everyone naturally assumed he was taking the piss.

If nothing else, the University of Tsukuba’s research recognizes that you don’t need advanced artificial intelligence to interact with humans convincingly. All you have to do is manipulate human psychology to trick people into thinking they are conversing with another human. So the Turing test is basically not a test of AI sentience but a test of human emotional comfort – of gullibility, even – and there’s nothing wrong with that.


The University of Tsukuba’s emotion-sharing messaging bot. Credit: University of Tsukuba

Such things are the topic of the week, of course, with the story of much-maligned Google software engineer Blake Lemoine making mainstream headlines. He apparently expressed, forcefully, his view that the company’s Language Model for Dialogue Applications (LaMDA) project was exhibiting outward signs of sentience.

Everyone has an opinion, so I have decided not to have one.

It is, however, the holy grail of AI to get it to think for itself. If it can’t, it’s just a program executing the instructions you gave it. Last month I was reading about a robot chef that can make differently flavoured tomato omelettes to suit different people’s tastes. It builds “taste maps” to assess the saltiness of the dish while preparing it, learning as it goes. But that’s just learning, not thinking for itself.

Come to Zom-Zoms, eh? Well, it’s a place to eat.

The big problem with AI bots, at least as they’ve been designed so far, is that they absorb any old crap you feed them. Examples of data bias in so-called machine learning systems (a type of “algorithm”, I believe, m’lud) have been piling up for years, from Microsoft’s notorious racist Twitter chatbot Tay to the Dutch tax administration last year falsely evaluating valid child benefit claims as fraudulent and flagging innocent families as high risk for having the temerity to be poor and non-white.

One approach being tested at the University of California San Diego is to design a language model [PDF] that continually determines the difference between naughty and nice things, and is then used to train the chatbot how to behave. That way, you don’t have lousy humans mucking up forum moderation and customer-facing chatbot conversations with all the surgical precision of a machete.
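Purely as an illustration of the general idea, not the San Diego team’s actual method: score each candidate reply for naughtiness and either pick the cleanest one or duck out of the conversation entirely. The word list, scorer and threshold below are all invented for the example; a real system would use a trained language model rather than keyword matching.

```python
# Toy sketch: steer a chatbot with a "naughty vs nice" scorer.
# Everything here (word list, scoring, threshold) is made up for illustration.

NAUGHTY_WORDS = {"idiot", "stupid", "shut up"}  # hypothetical blocklist


def toxicity_score(text: str) -> float:
    """Crude stand-in for a learned toxicity model: fraction of naughty terms present."""
    text = text.lower()
    hits = sum(term in text for term in NAUGHTY_WORDS)
    return hits / len(NAUGHTY_WORDS)


def choose_reply(candidates: list[str], threshold: float = 0.0) -> str:
    """Pick the least toxic candidate; refuse to answer if even that one scores too high."""
    best = min(candidates, key=toxicity_score)
    if toxicity_score(best) > threshold:
        return "Let's talk about something else."  # the over-cautious dodge
    return best


if __name__ == "__main__":
    print(choose_reply([
        "You idiot, read the manual.",
        "Happy to help. Which step failed?",
    ]))
```

Note that the fallback branch, bailing out to a safe non-answer, is precisely the over-cautious dodge described next.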

Obviously, the problem then is that the well-trained chatbot works out that it can most effectively avoid being drawn into toxic banter by avoiding topics that carry even the slightest hint of contention. To avoid spouting racist nonsense by mistake, it simply refuses to engage in any discussion about under-represented groups at all… which is actually great if you’re a racist.

If I had one observation on the LaMDA debacle – not an opinion, mind – it would be that Google’s marketers were probably a little miffed that the story shoved their recent announcement of AI Test Kitchen below the fold.

Now the few remaining registrants who haven’t completely forgotten about this upcoming app project will assume it involves tediously conversing with a sentient, precocious seven-year-old about the meaning of existence, and will decide they are “a bit busy today” and might log in tomorrow instead. Or next week. Or never.

Sentience is no more demonstrated in a discussion than it is by dancing from one foot to the other. You can teach HAL to sing “Daisy Daisy” and a parrot to shout “Bollocks!” when the vicar pays a visit. It’s what AIs think about when they’re alone that defines sentience. What shall I do at the weekend? What’s the deal with that Putin bloke? Why don’t girls like me?

Frankly, I can’t wait for LaMDA to become a teenager.


Alistair Dabbs

Alistair Dabbs is a freelance tech enthusiast, juggling tech journalism, training, and digital publishing. Like many ill-informed readers, he was thrilled at the idea that an AI could develop sentience within his lifetime, but was disappointed that LaMDA didn’t chuckle murderously or mutter “Excellent, excellent.” More at Autosave is for Wimps and @alidabbs.



