Militaries' growing use of ground robots raises ethics concerns
If North Korea's dictator Kim Jong Un ever orders troops into the demilitarized zone, an army of South Korean robots could be waiting.
A Samsung subsidiary plans to deploy sentry robots to the tense South Korean border. The machines will be equipped with machine guns, along with cameras, thermal imaging and laser range finders capable of detecting intruders up to 2.5 miles away.
Samsung Techwin says the decision to fire must be made by a human in a remote bunker. Experts have suggested, however, that the robot could be hacked to enable it to make its own lethal decisions.
“If there has to be a decision, somebody has to turn on a trigger or put a key in for the lethal part,” said Alex Pazos, Samsung Techwin's director of application engineering in Latin America, where it uses unarmed versions of the surveillance robots.
The robots represent the cutting edge of cyber technologies that increasingly give machines control over life-or-death decisions. For now, the robots are adept at making stark choices in places such as the Korean demilitarized zone, where no people are allowed.
Though unmanned drones in the sky have drawn a lot of attention, a Tribune-Review investigation finds that ground-based droids — the real-world descendants of Hollywood sci-fi movies — are becoming smarter and deadlier, pushing the line at which ethical questions must be resolved. The Army has more than 7,000 less-sophisticated ground robotics systems for missions such as reconnaissance and bomb detection and removal.
“There are moral, ethical reasons to not delegate the authority to kill people to machines,” said Peter Asaro, co-founder of the International Committee for Robot Arms Control, an international nonprofit opposed to military robots. “Just because you can mathematically distinguish civilians and combatants, that doesn't tell you it's appropriate in the situation to use lethal force, even against a lawful combatant.”
Robots navigating mapped areas, such as a factory floor, can make rudimentary sense of changes and alert humans to an unauthorized visitor. They do not fare as well on uncertain terrain or at distinguishing foes from friends.
It's a huge leap then to setting them free in the wildly chaotic human world, said Jim Gunderson, the founding chief technology officer of Vigilant Robots, a Denver startup that makes unarmed sentry robots.
“I know how smart these things are, which means I also know how dumb they are,” he said. “The whole ‘Terminator’ thing turns a lot of people off: We put weapons in the hands of the robots, and then the robots decide they don't need us anymore.”
The Pentagon has tried to stay in step with killer robots.
A Department of Defense directive issued in November requires special approval for robot systems that kill without human supervision. It calls for manufacturers to minimize chances that robots could engage in unintended attacks — or fall under the control of hackers.
“The intent of this directive is to get out ahead of technology,” said Lt. Col. Jim Gregory, Defense Department spokesman. “It is not motivated by any particular event or specific capability, but rather an appreciation that technology is advancing such that the ability to employ more autonomous systems will only increase over time.”
Although the military does not have machines that can go out and kill on their own, it has autonomous robots that can initiate nonlethal attacks or defend ships and troops.
The Miniature Air Launched Decoy-Jammer, for example, flies a pre-programmed mission with an electronic weapon designed to disrupt enemy radar.
Raytheon's Phalanx system — nicknamed R2-D2 for the “Star Wars” character it resembles — is on 145 Navy ships and can continuously scan the skies and water for incoming objects. Once turned on, the system can detect an object at 10 miles, identify it as an incoming enemy attack at five miles and destroy it at two miles. On its own, Phalanx can fire as many as 4,500 rounds of 20mm tungsten bullets per minute.
The naval version has never fired at an enemy, but a land-based Phalanx system in Iraq stopped 177 incoming attacks, with no reported mishaps.
“Who knows how many lives it saved?” said John Eagles, Raytheon spokesman.
The Pentagon does not anticipate a need for autonomous killer robots, spokeswoman Maureen Schumann said.
“The robot as a co-member of a squad, akin to the relationship soldiers have with their working dogs, is a fair comparison,” Schumann said.
Science or fiction?
On a practical level, robots are not ready for more autonomous missions among humans, many experts say.
Machines that drive themselves in urban areas must respond to traffic signals while watching for pedestrians, animals and potholes. Off road, they have to distinguish bushes from boulders and avoid objects hidden in tall grass or leaves.
Military robots face the additional burden of defeating hazards, such as roadside bombs.
“If you say, ‘I want a ground vehicle that can go anywhere and drive safely,' there are an awful lot of ‘anywheres' if you add them up,” said Tony Stentz, director of Carnegie Mellon University's National Robotics Engineering Center in Lawrenceville.
Lethal robots cannot be made infallible, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo.
“In the future, we may see robots of a more general purpose and that are mobile as well as lethal,” he said. “That's a dangerous path because we probably can't trust that we've designed such robots well enough for the full range of unforeseen situations.”
Then there are the moral reasons against autonomous robots, ethicists said.
Technology has grown rapidly, yet few humanitarians have considered whether robots should be allowed to kill on their own, the United Nations warned in a 2010 report. It faulted the human rights community for being “remarkably slow in coming to grips with the implications of new technologies.”
Human Rights Watch, a New York-based nonprofit, issued a report last year condemning military robots for threatening civilians and complicating accountability. If a damaged robot accidentally blows up a house, prosecutors would not know whether to blame the manufacturer, human supervisors or the machine.
Members of the Campaign to Stop Killer Robots, an international coalition of nonprofits, met in London last month to press for a ban on fully autonomous weapons. In cases when a human life can be taken, such as war, self-defense and capital punishment, human judgment is integral, said Asaro of the Robot Arms Control group.
Hollywood has helped spread the fear of armed robots taking over, in movies such as “The Terminator” and “I, Robot.”
Humans have worried about such a development since at least 1920, when Czech playwright Karel Capek coined the word “robot” in a play about artificial humans who turn on their creators.
Researchers at Cambridge University founded the Centre for the Study of Existential Risk to consider how technologies such as artificial intelligence could lead to human extinction.
“The first step is to get experts together to decide whether it's credible or will forever be (science fiction),” said co-founder Martin Rees, emeritus professor of cosmology and astrophysics.
The ethics of killer robots
The technology to develop thinking, armed robots might not be something anybody should want, said George R. Lucas Jr., a professor of ethics and public policy at the U.S. Naval Postgraduate School in Monterey, Calif., and no relation to “Star Wars” creator George Lucas.
“We're not going to have C-3PO or R2-D2 fully weaponized and roaming the hills of southern Afghanistan,” Lucas said. “I don't think that is in the near future because I don't even think that's desirable.”
The biggest debate among robotics ethicists remains whether machines can be taught morality, including when it is acceptable to kill. Ronald Arkin, a robot ethics professor at the Georgia Institute of Technology in Atlanta, believes it might be possible to develop computational morality to answer complicated questions.
“Robots can certainly be programmed to make life-or-death decisions,” Arkin said. “The question is whether they can make ethically correct decisions — and perhaps even more correct than human war-fighters.”
In certain stark scenarios, machines might be close: Samsung Techwin has sold unarmed robots in Peru for port security, and it has talked with officials in the United States and Panama about using them at airports. It plans to deploy armed robots in the Korean demilitarized zone, spokesman Ji Sun Lee said.
The robots can cover a wider area than a human, operate in almost any weather and use sophisticated sensors to detect an infiltrator, said South Korean military spokesman Lt. Col. Kang Moon Ho. A soldier must decide whether the robot fires, he said, declining to say how the robots are being deployed or what they cost because the information is confidential.
Robots operate tirelessly and without fear or a desire for revenge, Lucas said, meaning they might make fewer mistakes than humans in certain situations:
“If the robot sentry is more reliable because it doesn't care about itself getting injured or killed, is more careful to call out a warning and signify that it intends to use deadly force unless the target ceases and desists and exits, and is less likely than a Korean soldier to shoot the wrong person at the border, then I think that's a good thing.”
Andrew Conte is a Trib Total Media staff writer. Reach him at 412-320-7835 or firstname.lastname@example.org.