Even though the concept is fictional, active measures are being taken against anything like it

May 9, 2014 13:44 GMT

Pretty much everyone who knows what motion pictures are knows about the Terminator series, or has at least heard a reference to it. As it happens, the US military is actively trying to prevent a robot apocalypse.

Or, well, it has started funding research into robot morality, in order to ensure that something like Skynet never comes to pass.

The first order of business will be to figure out exactly what human morality is.

The second order of business will be devising computational algorithms that can instill in autonomous robots the moral values settled on in that first phase.
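Nobody outside the project has said what those algorithms will look like, but the simplest conceivable version is a rule-based filter that vets a robot's candidate actions against hard moral constraints before anything gets executed. The Python sketch below is purely illustrative: every name in it (Action, MoralRule, choose_action, the example rules) is hypothetical, and it is in no way the researchers' actual approach.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    name: str
    harm_to_humans: float   # estimated harm, 0.0 (none) to 1.0 (lethal)
    mission_value: float    # how much the action advances the mission

@dataclass
class MoralRule:
    description: str
    permits: Callable[[Action], bool]  # returns False to veto an action

# Hypothetical example rules: hard constraints, checked before any
# utility comparison ever happens.
RULES = [
    MoralRule("Never take an action expected to harm a human",
              lambda a: a.harm_to_humans == 0.0),
    MoralRule("Never act without some benefit to the mission",
              lambda a: a.mission_value > 0.0),
]

def choose_action(candidates: list[Action]) -> Optional[Action]:
    """Pick the highest-value action that every moral rule permits.

    Morality acts as a filter on the option set, not as just another
    term in the utility function: forbidden actions are never ranked.
    """
    permissible = [a for a in candidates
                   if all(rule.permits(a) for rule in RULES)]
    if not permissible:
        return None  # refuse to act rather than violate a rule
    return max(permissible, key=lambda a: a.mission_value)

if __name__ == "__main__":
    options = [
        Action("push through crowd", harm_to_humans=0.3, mission_value=0.9),
        Action("wait for crowd to clear", harm_to_humans=0.0, mission_value=0.4),
    ]
    best = choose_action(options)
    print(best.name if best else "no permissible action")
    # -> wait for crowd to clear
```

The hard part, of course, is everything this toy version takes for granted: where the rules come from, and how a robot would estimate a number like harm_to_humans in the first place. That is exactly what the first phase of the project is supposed to work out.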

All told, this is a very proactive approach on the part of the US Department of Defense.

Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI) have teamed up for this project, using funds from the Office of Naval Research (ONR), the branch of the DoD that coordinates science and technology research for the Navy and Marine Corps.

Before you freak out: don't worry, there are no human-made artificial intelligences yet, none that I know of anyway.

That doesn't mean this new move on the DoD's part is premature, though. Fortune favors the prepared, after all, and since some bots can already function independently, sort of, it makes sense to preempt their (possible) unfortunate reactions in the event that they become self-aware and have the equivalent of a robotic freak-out.

The remaining question is this: how will we teach robots to be morally upright when it comes to things that even we humans haven't quite figured out, like the difference between murder and capital punishment?