That's the vision of the military

Feb 9, 2010 09:50 GMT

Undoubtedly, one of the major drawbacks of wars is the fact that people die in them. Military analysts and strategists have long tried to reduce the number of casualties as much as possible, but the best they could come up with until now was flying airplanes high in the sky and striking their targets from afar. Now, a new vision has it that robots should be used on the battlefield, in a way that would make matters more civilized and “well-educated.” More than 8,000 robots are currently deployed by the US Army in Iraq and Afghanistan, the BBC News reports.

At this point, unmanned aerial vehicles (UAVs) can wreak havoc on the ground, but they are the only widely used machines that have lethal capabilities. Most robots on the ground don't have this luxury, and are confined to defusing roadside bombs, removing land mines, and carrying soldiers' equipment. The future could see more emphasis being placed on armed robots, and even on driverless vehicles outfitted with various types of weaponry. “The closer you are to being shot, the more you understand the value of having a remote weapons capability,” says Bob Quinn, from the US subsidiary of the UK arms manufacturer QinetiQ.

Addressing concerns, shared even by military commanders, about independent killing machines roaming freely, the expert says that robot manufacturers themselves realize the importance of creating machines that operate only under the control of soldiers, and never independently. But some, such as author Peter Singer, say that the speed of modern warfare may simply be too great to allow for direct human control. He reveals that, even now, there are weapon systems in which people do not have control power, but only veto power.

He gives the example of automated counter-artillery batteries that are currently stationed in Afghanistan. “The human reaction time when there's an incoming cannon shell is basically we can get to mid-curse word… [This] system reacts and shoots it down in mid-air. We are in the loop. We can turn the system off, we can turn it on, but our power really isn't true decision-making power. It's veto power now,” he says. There are also concerns about a machine's ability to distinguish friend from foe, or an enemy from non-combatant personnel.

“If there's an area of fighting that's so intense that you can assume that anyone there is a combatant, then unleash the robots in that kind of scenario. Some people call that a kill box. Any target [in a kill box] is assumed to be a legitimate target,” says US academic Patrick Lin, who was recently contracted by the US military to study robot ethics. Dr. Robert Finkelstein, an expert at Robotic Technology Inc., has another solution. “Robots that are programmed properly are less likely to make errors and kill non-combatants, innocent people, because they're not emotional, they won't be afraid, act irresponsibly in some situations.”