Pittsburgh tech firm searches for bugs in self-driving software

Aaron Aupperlee
| Monday, Jan. 23, 2017, 4:33 p.m.
Lead Engineer Ryan McNulty works at his desk inside of Edge Case Research's offices in Lawrenceville on Monday, Jan. 23, 2017. (Nate Smallwood | Tribune-Review)
Edge Case Research CEO Michael Wagner poses for a portrait inside of the company's offices in Lawrenceville on Monday, Jan. 23, 2017. (Nate Smallwood | Tribune-Review)
A screen shows a simulation of a car driving in the offices of Edge Case Research in Lawrenceville on Monday, Jan. 23, 2017. (Nate Smallwood | Tribune-Review)

Automobiles have millions of lines of computer code running everything from dashboard displays to throttle controls.

Make that car autonomous, and the computing complexity could multiply by 100.

But it still would take only one bug in the software or a bad line of code to potentially make the system go haywire, said Mike Wagner, co-founder of Edge Case Research, a Pittsburgh company that tests and simulates computer software to identify and fix bugs and other weaknesses.

Wagner, however, is concerned that not all the companies working to take our hands off the wheel are paying close attention to their software.

“No, they are not yet doing this, and yes, they need to be doing it,” Wagner said of the companies actively developing and testing autonomous vehicles.

Edge Case Research, a Lawrenceville company of about 10 people, works with some companies developing autonomous vehicles and uses automated robot assessment tools to test the robustness of the software powering self-driving cars. The company simulates failing sensors or cameras to test how the software reacts. It feeds unexpected or unnatural data into the system, such as a black pixel in an image where one shouldn't be or a speed of negative infinity, to see what the car does.

“You get right at things you're not going to find on the test track,” Wagner said. “The space of possible behaviors and the ways that the logic could execute, you want to have the computer pull apart your code.”
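Edge Case's tools are proprietary and its interfaces aren't public, but the gist of that kind of robustness test can be sketched in a few lines of Python. The plan_braking function below is a made-up stand-in for real autonomy code; the point is the harness, which feeds it the sort of values Wagner describes, such as a speed of negative infinity, and checks that the code refuses them rather than returning a dangerous command.

import math

# Hypothetical stand-in for real autonomy code; the actual interfaces used
# by Edge Case Research or its clients are not public.
def plan_braking(speed_mps, obstacle_distance_m):
    # Return a brake command between 0.0 (none) and 1.0 (full).
    if not (math.isfinite(speed_mps) and math.isfinite(obstacle_distance_m)):
        raise ValueError("non-finite sensor input")
    if speed_mps < 0 or obstacle_distance_m < 0:
        raise ValueError("out-of-range sensor input")
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # assumes about 6 m/s^2 of braking
    return min(1.0, stopping_distance / max(obstacle_distance_m, 0.1))

# Robustness harness: feed the planner values a test track would never
# produce and confirm it rejects them instead of emitting a garbage command.
malformed_inputs = [
    (float("-inf"), 30.0),   # "a speed of negative infinity"
    (float("nan"), 30.0),    # a corrupted sensor reading
    (25.0, -5.0),            # an impossible negative distance
]

for speed, distance in malformed_inputs:
    try:
        command = plan_braking(speed, distance)
        assert 0.0 <= command <= 1.0, "unsafe brake command"
    except ValueError:
        pass  # rejecting bad input is an acceptable, safe outcome

In a real tool, the malformed inputs would be generated automatically and by the millions, which is what lets the computer surface combinations a human tester would never think to try.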

Edge Case has about 25 clients and has been getting more work as interest in autonomous vehicles heats up. The company is working with the Army on autonomous technology for convoying trucks. Other clients are in defense, automotive, finance and the Internet of Things, Wagner said.

Bad software in Toyotas caused the cars to suddenly accelerate, said Phil Koopman, co-founder of Edge Case with Wagner and an expert witness in Toyota legal proceedings. Toyota recalled millions of vehicles, faced hundreds of wrongful-death and personal injury lawsuits and paid a $1.2 billion fine in 2014 in a settlement with the U.S. Department of Justice.

Bugs caused problems with military fighter jets crossing the International Date Line, computers handling leap years and sensors on a rocket engine, Wagner said. He attributed the hacking of a Jeep in 2015 to faulty software. Charlie Miller and Chris Valasek, the pair who hacked the Jeep, eventually were hired by Uber.
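The flight and embedded code behind those incidents isn't public, but the date line failure illustrates the class of boundary bug Wagner is describing: logic that behaves correctly for almost every input and breaks at a wraparound point. A toy Python sketch, with made-up function names, shows how a naive longitude calculation differs from one that handles the seam at 180 degrees.

# Illustration only; not the actual avionics code. The naive version works
# almost everywhere but misreports a small course change as a huge one when
# a flight path crosses the date line.

def longitude_delta_naive(lon_a, lon_b):
    return lon_b - lon_a

def longitude_delta_wrapped(lon_a, lon_b):
    # Fold the difference into the range (-180, 180] so the seam at
    # +/- 180 degrees is handled like any other heading change.
    delta = (lon_b - lon_a + 180.0) % 360.0 - 180.0
    return 180.0 if delta == -180.0 else delta

print(longitude_delta_naive(179.0, -179.0))    # -358.0 for what is really a 2-degree move
print(longitude_delta_wrapped(179.0, -179.0))  # 2.0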

Google just started testing for bugs across all of its open source software, a sign that major companies are beginning to take robustness testing seriously, Wagner said.

“We definitely have a cultural disconnect. The folks in the robotics world don't necessarily think about these kinds of issues. They are more concerned, and perhaps rightfully so, in building the right kind of algorithm,” Wagner said. “Right on the heels of it, when you're ready to deploy it, safety engineering says you need to test the robustness of it. You have to test the fault tolerance of it.”

Major companies working on autonomous cars have said they do pay attention to the integrity of their software. General Motors acquired Cruise Automation, a San Francisco-based autonomous vehicle technology company, to help it develop the software inside the self-driving Chevy Bolt, Harry Lightsey, GM's executive director of public policy on emerging technologies, told the Tribune-Review. GM announced in December it would immediately begin testing the autonomous Bolts on Michigan roads and begin production of the cars in early 2017.

The car company has 40 test vehicles on roads every day, Lightsey said.

“We're running the software through simulations. We're running it on the road, trying to present it with as many scenarios as we possibly can to make sure that all the glitches are exposed and fixed,” Lightsey said. “And they are making corrections, and the system is learning itself. The system that you take out on Day 2 is not the system you took out on Day 1.”

Uber hired people who worked on autopilot software and people familiar with the risk and safety concerns of space travel, Raffi Krikorian, software director at Uber's Advanced Technology Center in Pittsburgh, told the Tribune-Review in November.

Ford, which aims to have a fleet of autonomous cars for ride-sharing on the road by 2021, also is paying close attention to software development and testing as it develops self-driving vehicles, a company spokesman said.

“We focus on the security of our customers before the introduction of any new technology feature by instituting policies, procedures and safeguards to help ensure their protection,” Alan Hall wrote to the Tribune-Review in an email.

Wagner and Koopman, however, weren't reassured by the automakers' internal testing. They said the industry's history of bad software in cars points to the need for external review.

Aaron Aupperlee is a Tribune-Review staff writer. Reach him at aaupperlee@tribweb.com or 412-336-8448.
