
Futurists' report reviews dangers of smart robots

Monday, Nov. 2, 2009

Scientists are preparing to publish a report this month that examines, in part, whether robots could eventually become so smart they pose a threat to society.

The report will include concerns some researchers have voiced about the legal and ethical use of artificial intelligence.

Most computer scientists don't subscribe to radical visions of a robot-dominated future, said Eric Horvitz, a Microsoft researcher in Redmond, Wash., who called the group together to write the report.

But at least one scientist believes intelligent machines could pose threats to human beings. Another, Carnegie Mellon University's Tom Mitchell, said what alarms him most is what people might do with computers based on artificial intelligence.

"If there are concerns, we have quite a bit of time to address them," said Horvitz, the immediate past president of the Association for the Advancement of Artificial Intelligence based in California.

"These technological advancements happen incrementally, and human beings are the authors," Horvitz said. "You don't start playing with kites and suddenly find a 747 sitting in your backyard."

It is the first time AAAI scientists have come together to discuss artificial intelligence's potential positive and negative impacts on society, Horvitz said.

The report will outline the state of artificial intelligence research and probable outcomes — and attempt to dispel science fiction-inspired myths of world conquest by intelligent machines.

The group, the AAAI Presidential Panel on Long-Term AI Futures, includes Pittsburgh-based researchers. An interim report the group published in August raised the possibility of investing more research into making "machine learning and reasoning more transparent to people."

Mitchell, the head of Carnegie Mellon's machine learning department in the School of Computer Science, agreed that no danger exists of robots taking over the world, particularly with current technology.

"But a realistic concern is the prospect of computer viruses becoming intelligent," he said.

An intelligent virus with speech-recognition abilities could hide in someone's laptop, desktop or cell phone and listen to conversations, Mitchell said.

"Government officials might want to listen to millions of people to find out who's talking about collecting bomb-making materials," he said. "Criminals might want to simply get your credit card information so they can use it. Planting that type of virus on cell phones would be the easiest way to do it."

David McAllester, a group member and computer scientist who is a professor and chief academic officer at the Toyota Technological Institute at Chicago, believes it's inevitable that fully automated intelligent machines will be able to design and build smarter, better versions of themselves. He acknowledges he's in the minority among his peers.

He estimates a 10 percent chance of that happening within 25 years and a 90 percent chance of it occurring within 75 years.

Scientists describe that concept as "the intelligence explosion," or "the Singularity."

Once it happens, machines would become infinitely intelligent, ever-increasing in their capabilities, said McAllester, who earned his doctorate at the Massachusetts Institute of Technology.

"It's an incredibly dangerous scenario."

"I don't see any of that happening," said Manuela Veloso, a professor at Carnegie Mellon's Robotics Institute. "It's actually the opposite. I wish more were willing to see the limitations of these machines. People will find that robots are extremely bounded by what they can do."

People must realize the difference between the perceived and actual ability of today's robots, said Veloso, who developed robots that can play soccer.

"But they can't do anything else," Veloso said. "They're very good at playing soccer. But they're limited to that."

Profound difficulties exist for today's robots, she said, including poor object recognition, a lack of humanlike hands, difficulty in attaching enough sensors for a robot to operate in the real world, and a lack of software that can authentically replicate how humans learn.

"I'm an engineer," Veloso said. "I don't do research on what robots will and won't do. I'm not philosophical. I'm very realistic. I'm still curious on what robots can do. I want robots to be one more source of help — making you dinner, driving your car. That's my vision."

Horvitz said some members of the group focused on ethical and legal issues likely to arise as robots become more enmeshed within society. The August report cited "autonomous systems that might one day be charged with making (or advising people on) high-stakes decisions, such as medical therapy or the targeting of weapons."

"If you're a young student in medicine, you have a code of ethics that tells you what's right and wrong. As a young roboticist, you don't have that code," said author P.W. Singer, director of the 21st Century Defense Initiative at The Brookings Institute in Washington.

Addressing the ethical and legal questions is critical, said Singer, who wrote this year's book, "Wired for War." Robots represent a genuine revolution in human history, akin to the printing press, gunpowder and the atomic bomb, Singer said. The world is going to change in ways people can't predict, as it did after those inventions, he said.

The U.S. military uses robots to kill people, so it's too late to enact a universal law, as in Isaac Asimov's fiction, prohibiting robots from harming people, Singer said.

Legal issues exist in how law enforcement uses robots, Singer said. The city of Houston uses drones for observation. That raises constitutional questions of privacy and probable cause, he said.

"Scientists engaging in the public debate is a very important thing," Singer said.
