Futurists' report reviews dangers of smart robots
Scientists are preparing to publish a report this month that examines, in part, whether robots could eventually become so smart they pose a threat to society.
The report will include concerns some researchers have voiced about the legal and ethical use of artificial intelligence.
Most computer scientists don't subscribe to radical visions of a robot-dominated future, said Eric Horvitz, a Microsoft researcher in Redmond, Wash., who called the group together to write the report.
But at least one scientist believes intelligent machines could pose threats to human beings. Another, Carnegie Mellon University's Tom Mitchell, said the most alarming prospect is what people might do with computers based on artificial intelligence.
"If there are concerns, we have quite a bit of time to address them," said Horvitz, the immediate past president of the Association for the Advancement of Artificial Intelligence based in California.
"These technological advancements happen incrementally, and human beings are the authors," Horvitz said. "You don't start playing with kites and suddenly find a 747 sitting in your backyard."
It is the first time AAAI scientists have come together to discuss artificial intelligence's potential positive and negative impacts on society, Horvitz said.
The report will outline the state of artificial intelligence research and probable outcomes — and attempt to dispel science fiction-inspired myths of world conquest by intelligent machines.
The group, the AAAI Presidential Panel on Long-Term AI Futures, includes Pittsburgh-based researchers. An interim report the group published in August raised the possibility of investing more research into making "machine learning and reasoning more transparent to people."
Mitchell, the head of Carnegie Mellon's machine learning department in the School of Computer Science, agreed that no danger exists of robots taking over the world, particularly with current technology.
"But a realistic concern is the prospect of computer viruses becoming intelligent," he said.
An intelligent virus with speech-recognition abilities could hide in someone's laptop, desktop or cell phone and listen to conversations, Mitchell said.
"Government officials might want to listen to millions of people to find out who's talking about collecting bomb-making materials," he said. "Criminals might want to simply get your credit card information so they can use it. Planting that type of virus on cell phones would be the easiest way to do it."
David McAllester, a group member and computer scientist who is a professor and chief academic officer at the Toyota Technological Institute at Chicago, believes it's inevitable that fully automated intelligent machines will be able to design and build smarter, better versions of themselves. He acknowledges he's in the minority among his peers.
He estimates a 10 percent chance of that happening within 25 years and a 90 percent chance of it occurring within 75 years.
Scientists describe that concept as "the intelligence explosion," or "the Singularity."
Once it happens, machines would become infinitely intelligent, ever-increasing in their capabilities, said McAllester, who earned his doctorate at the Massachusetts Institute of Technology.
"It's an incredibly dangerous scenario."
"I don't see any of that happening," said Manuela Veloso, a professor at Carnegie Mellon's Robotics Institute. "It's actually the opposite. I wish more were willing to see the limitations of these machines. People will find that robots are extremely bounded by what they can do."
People must realize the difference between the perceived and actual ability of today's robots, said Veloso, who developed robots that can play soccer.
"But they can't do anything else," Veloso said. "They're very good at playing soccer. But they're limited to that."
Profound difficulties exist for today's robots, she said, including poor object recognition, a lack of humanlike hands, difficulty in attaching enough sensors so the robot can operate in the real world and lack of software that can authentically replicate how humans learn.
"I'm an engineer," Veloso said. "I don't do research on what robots will and won't do. I'm not philosophical. I'm very realistic. I'm still curious on what robots can do. I want robots to be one more source of help — making you dinner, driving your car. That's my vision."
Horvitz said some members of the group focused on ethical and legal issues likely to arise as robots become more enmeshed within society. The August report cited "autonomous systems that might one day be charged with making (or advising people on) high-stakes decisions, such as medical therapy or the targeting of weapons."
"If you're a young student in medicine, you have a code of ethics that tells you what's right and wrong. As a young roboticist, you don't have that code," said author P.W. Singer, director of the 21st Century Defense Initiative at The Brookings Institute in Washington.
Addressing the ethical and legal questions is critical, said Singer, who wrote this year's book, "Wired for War." Robots represent a genuine revolution in human history, akin to the printing press, gunpowder and the atomic bomb, Singer said. The world is going to change in ways people can't predict, as it did after those inventions, he said.
The U.S. military already uses robots to kill people, so it's too late to enact a universal law, like the one in Isaac Asimov's fiction, that prohibits robots from harming people, Singer said.
Legal issues exist in how law enforcement uses robots, Singer said. The city of Houston uses drones for observation. That raises constitutional questions of privacy and probable cause, he said.
"Scientists engaging in the public debate is a very important thing," Singer said.