This way, we may yet avert the Terminator scenario

Oct 15, 2014 11:37 GMT

For the most part, "teaching" robots right from wrong has been synonymous with loading a program, or a whole suite of programs, into whatever passes for their cortex, though even that has remained mostly theoretical up to this point.

While some robots are capable of operating with a certain degree of autonomy, they still follow software routines that are, in the end, quite narrow in scope. In short, we're a long way from true AI, if we ever achieve artificial intelligence at all.

Still, AI as it is today is sufficient to run a caretaker robot or a self-driving car, and should soon enough allow for more sophistication.

Then there are the awareness-building methods, so to speak, which try to emulate the way humans learn: through imitation, trial and error, and eventually, once self-awareness has set in, through ideas of their own.
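To make the "trial and error" idea concrete, here is a minimal sketch of that style of learning, a toy epsilon-greedy learner that discovers which action pays off instead of being told. It is not from the study discussed here; the action names and reward values are entirely hypothetical, chosen only to illustrate the contrast with hard-coded rules.

```python
import random

# Hypothetical actions and rewards, purely for illustration.
ACTIONS = ["help_patient", "ignore_patient", "call_nurse"]
REWARDS = {"help_patient": 1.0, "ignore_patient": -1.0, "call_nurse": 0.5}

def noisy_reward(action):
    """Simulated feedback from the environment (made-up values)."""
    return REWARDS[action] + random.gauss(0, 0.1)

def learn(episodes=1000, epsilon=0.1):
    estimates = {a: 0.0 for a in ACTIONS}  # learned value of each action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        # Occasionally try something at random (trial),
        # otherwise repeat what has worked so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: estimates[a])
        reward = noisy_reward(action)
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward (error correction).
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

if __name__ == "__main__":
    # "help_patient" should end up with the highest learned value.
    print(learn())
```

The point of the sketch is that nobody wrote a rule saying "helping is good"; the behavior emerges from feedback, which is exactly what makes the morality of such systems harder to pin down than that of explicitly programmed ones.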

The ethics of robots masquerading as humans

This is the main issue brought up in a research paper published in the Journal of Experimental & Theoretical Artificial Intelligence.

Basically, the question is whether it's evil for robots to pretend to be humans, and whether the pretense itself makes them evil.

It's a rather premature question, really, but someone was bound to ask it sooner or later. It just turned out to be sooner.

The study also tries to determine whether ethical accountability should lie with the machines, with us humans, or with both, and under what circumstances it applies to each.

The philosopher Luciano Floridi laid out theories on Information Ethics a while back, arguing that a robot masquerading as a person becomes hard to tell apart from a real human in certain contexts, and that the masquerade ultimately causes an impoverishment of being and a lack of good.

There's more to the theory, but it boils down to whether we should really be trying to make robots imitate humans as closely as we seem to want.

The new study we mentioned above analyzed six robots acting as caretakers for patients, though "robot" is used loosely here, since the Nanis Twitter bot and Apple's Siri phone assistant were among them.

The study seems to be against humanoid robots and AIs

According to the paper's authors, a machine increases the chances of evil simply by masquerading, because the act makes it harder for individuals to make authentic ethical decisions. The paper doesn't seem to say anything about teaching robots morals when they're not actively trying to imitate humans, though.

Maybe they're thinking about this the wrong way. You don't have to be human to be morally upright; you just need to be sapient. Morality usually follows from sapience, provided the person is brought up properly.