Technology

Chatbot developed at Carnegie Mellon uses humans to answer questions AIs can't

Aaron Aupperlee | Thursday, Feb. 8, 2018, 3:48 p.m.
A conversation with Evorus about what kind of pet I should get. (Photo: Aaron Aupperlee)

A digital assistant that taps real people to answer questions just got smarter thanks to artificial intelligence.

The team at Carnegie Mellon University that developed Chorus, a chatbot that uses the power of the crowd to answer questions, added AI-powered systems similar to Siri and Alexa to the mix to create Evorus.

“It's sort of the ever-learning evolution of Chorus,” said Jeff Bigham, an associate professor in CMU's Human-Computer Interaction Institute. “The reason we made this is that we were kind of underwhelmed by the existing digital assistants out there. Not because they aren't amazing technological achievements but because they are rather narrow.”

Bigham said assistants like Amazon's Alexa, Microsoft's Cortana, Google Assistant and Apple's Siri are good at answering simple, single questions. Complex questions or even follow-up questions often stump them. Bigham's team created Chorus to demonstrate how a crowd of people could answer more complex questions with surprising speed.

Bigham doesn't look at it as humans versus computers or see Chorus and Evorus as pleas for the survival of people-powered intelligence. Artificial intelligence is coming, he said, and the best approach is to figure out what it does best, what humans do best and how the two can work together.

“Leveraging the best of what AI and humans have to bring,” Bigham said. “For the first time we're able to hit that middle spot where it is leveraging both automated systems and human systems.”

Chorus makes use of Amazon's Mechanical Turk, a network of people who do on-demand work. When someone asks Chorus a question, it submits the question to the network, and people begin proposing answers, which are voted on until the best one is selected and sent back. Answers take only a few minutes, and the people who worked on the query are paid. The average Chorus session costs about $2.48.
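The article's description suggests a loop roughly like the following. This is a minimal sketch with stand-in worker functions in place of real Amazon Mechanical Turk workers; the function names and the vote threshold are illustrative assumptions, not the CMU team's code.

```python
# Illustrative sketch of the crowd-answering loop described above.
# Stand-in functions replace real Mechanical Turk workers; the names
# and the acceptance threshold are assumptions.
from collections import Counter

def crowd_answer(question, proposers, voters, votes_to_accept=3):
    """Gather candidate answers from workers, then tally votes until one
    answer reaches the acceptance threshold and is sent back to the user."""
    candidates = [propose(question) for propose in proposers]
    tally = Counter()
    for vote in voters:
        choice = vote(question, candidates)  # index of the preferred candidate
        tally[choice] += 1
        if tally[choice] >= votes_to_accept:
            return candidates[choice]
    # If no candidate hit the threshold, fall back to the most-voted one.
    best_index, _ = tally.most_common(1)[0]
    return candidates[best_index]

# Example run with hard-coded stand-ins for crowd workers:
proposers = [lambda q: "A dog might suit you.", lambda q: "Consider a cat."]
voters = [lambda q, c: 0, lambda q, c: 0, lambda q, c: 1, lambda q, c: 0]
print(crowd_answer("What sort of pet should I get?", proposers, voters))
```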

Evorus adds AI and machine learning to the process. The system will now remember previous questions and answers and start to suggest answers before it asks people for help. Evorus will also help select the best answers with less human involvement.
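In code, that hybrid step might look roughly like this. The in-memory store of past question-answer pairs, the similarity check and the function names below are illustrative assumptions rather than the actual Evorus implementation; the point is only that automated suggestions join the same candidate pool the crowd votes on.

```python
# Hedged sketch of the hybrid step: a bot proposer that reuses past
# question-answer pairs, mixed into the same candidate pool as the crowd.
# MEMORY contents and the similarity cutoff are made up for illustration.
import difflib

MEMORY = {"what sort of pet should i get?": "Do you live in the city?"}

def bot_proposer(question):
    """Suggest an answer drawn from previously seen questions, if one is close."""
    match = difflib.get_close_matches(question.lower(), MEMORY, n=1, cutoff=0.6)
    return MEMORY[match[0]] if match else None

def hybrid_candidates(question, human_proposers):
    """Mix automated suggestions with human ones; the crowd (and, in the real
    system, an automatic voter) then upvotes whichever answer is best."""
    candidates = [propose(question) for propose in human_proposers]
    suggestion = bot_proposer(question)
    if suggestion is not None:
        candidates.append(suggestion)
    return candidates

print(hybrid_candidates("What kind of pet should I get?",
                        [lambda q: "Tell me about your living situation."]))
```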

During a five-month test with 80 people asking 181 questions, automated answers were chosen 12 percent of the time and human voting on answers dropped by 14 percent. The cost to reply to each message dropped by 33 percent.

“You can really start to see how we could achieve both the reliability of the human workers and the low-latency and the low cost of the chatbots,” Bigham said.

Bigham said most digital assistants don't know how to answer a question like, “What sort of pet should I get?” Siri pulls up a list of websites about the question. Evorus responded with questions asking if the person lives in the city, has a lot of room, is gone often on vacation or travels for work. It asked if the person has a backyard and if the person has ever owned a pet. Through these questions and my answers, the chatbot identified an appropriate pet. In my case, it was a dog.

I also asked Evorus what I should wear Wednesday as the weather in Pittsburgh shifted from rain to snow to rain. Siri told me the weather when I asked her. Evorus asked what the weather was like and started suggesting warm clothing options.

“Can you wear jeans to work? How about a sweater and some warm pants?” Evorus responded. “A leather jacket would be suitable for snowy conditions. A shirt layered with a long-sleeved shirt and casual slacks would work nicely.”

Evorus uses Google Hangouts to communicate. People can try it for free by signing up at http://talkingtothecrowd.org/. The chatbot is far from perfect, so expect bugs and hiccups.

Aaron Aupperlee is a Tribune-Review staff writer. Reach him at aaupperlee@tribweb.com, 412-336-8448 or via Twitter @tinynotebook.
