Technology

Google: Using your health records to predict whether you'll live or die

Saturday, Feb. 3, 2018, 9:00 p.m.
An elite team of computer scientists and medical experts from Google and three major U.S. universities believes it has found the best way yet to predict outcomes for hospitalized patients. (Dreamstime/TNS)

Dr. Google may not have much of a bedside manner — she's an algorithm, after all — but if she says you're soon to be “expired,” she claims to be about 95 percent accurate, and you might want to start planning that last meal.

An elite team of computer scientists and medical experts from Google and three major U.S. universities believes it has found the best way yet to predict whether a hospitalized patient will end up leaving via the front doors or the loading dock at the morgue.

As might be expected from research led by Google, the software for accomplishing this task relies on artificial intelligence, which has become a key focus in virtually all areas of the Mountain View company's operations.

In a just-released paper, which has not been peer reviewed, the researchers claim their software, built on a form of AI known as “deep learning,” predicts patient outcomes better than other methods currently available.

“These models outperformed state-of-the-art traditional predictive models in all cases,” the paper said.

To make its predictions, the software uses medical-records data including patient demographics, previous diagnoses and procedures, lab results and vital signs.
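As a rough illustration of what that input looks like, here is a minimal Python sketch that bundles demographics, prior diagnoses and procedures, lab results and vital signs into one record per hospitalization. Every class, field name and code in it is hypothetical; the sketch shows the general shape of the data, not the researchers' actual pipeline.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only: an assumed way to bundle per-hospitalization record data
# (demographics, diagnoses, procedures, labs, vitals) for a predictive model.

@dataclass
class Event:
    timestamp: float              # hours since admission
    kind: str                     # "diagnosis", "procedure", "lab" or "vital"
    code: str                     # e.g. an ICD/CPT/LOINC-style code
    value: Optional[float] = None # numeric result for labs and vitals

@dataclass
class Hospitalization:
    age: int
    sex: str
    events: List[Event] = field(default_factory=list)

# A made-up example record for one admission.
record = Hospitalization(
    age=67,
    sex="F",
    events=[
        Event(timestamp=0.5, kind="vital", code="heart_rate", value=104.0),
        Event(timestamp=2.0, kind="lab", code="lactate", value=3.1),
        Event(timestamp=4.0, kind="diagnosis", code="J18.9"),  # pneumonia
    ],
)
```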

At the top of the list of predicted outcomes is “inpatient mortality,” the case in which the patient is reported as “expired.”

But the software goes beyond the question of life and death to answer questions that matter to patients as well as to hospital administrators and bean counters. It also predicts unplanned readmissions to the medical facility within 30 days, probable length of stay and likely diagnoses, the last of which are expressed as hospital billing codes.
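To make the multi-outcome idea concrete, the hypothetical sketch below gives one shared model a separate output for each prediction the article lists: inpatient mortality, 30-day readmission, length of stay and billing-code diagnoses. The architecture, layer sizes and vocabulary size are assumptions chosen for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

N_FEATURES = 256       # size of an encoded hospitalization record (assumed)
N_BILLING_CODES = 500  # size of the billing-code vocabulary (assumed)

class OutcomeHeads(nn.Module):
    """One shared encoding, four task-specific outputs (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(N_FEATURES, 128), nn.ReLU())
        self.mortality = nn.Linear(128, 1)       # inpatient mortality
        self.readmission = nn.Linear(128, 1)     # unplanned 30-day readmission
        self.length_of_stay = nn.Linear(128, 1)  # expected days in hospital
        self.diagnoses = nn.Linear(128, N_BILLING_CODES)  # billing-code labels

    def forward(self, x):
        h = self.shared(x)
        return {
            "mortality_prob": torch.sigmoid(self.mortality(h)),
            "readmission_prob": torch.sigmoid(self.readmission(h)),
            "expected_los_days": self.length_of_stay(h),
            "diagnosis_probs": torch.sigmoid(self.diagnoses(h)),
        }

model = OutcomeHeads()
preds = model(torch.randn(1, N_FEATURES))  # one encoded hospitalization
```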

The paper covered a study of some 216,000 hospitalizations involving about 114,000 patients, whose identities were hidden from the researchers, at two hospitals: UC San Francisco's and the University of Chicago's.

“Its biggest claim is the ability to predict patient deaths 24-48 hours before current methods, which could allow time for doctors to administer life-saving procedures,” according to online magazine Quartz, which spotted the paper published Jan. 26.

At 24 hours after admission, the software predicted death with 93 to 95 percent accuracy, about 10 percent better than the traditional predictive method, according to the paper.
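The article quotes the result as an accuracy figure; models of this kind are often scored instead by how well they rank patients by risk, for example with the area under the ROC curve. The toy sketch below, with made-up labels and predicted probabilities, shows how such a score would be computed on a held-out set; it does not reproduce the paper's evaluation.

```python
from sklearn.metrics import roc_auc_score

# Toy illustration of scoring a mortality predictor on held-out patients.
# y_true: 1 if the patient died in the hospital, 0 otherwise (invented data).
# y_score: the model's predicted probability at 24 hours after admission.
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_score = [0.05, 0.20, 0.90, 0.10, 0.75, 0.30, 0.15, 0.85]

print(f"AUROC: {roc_auc_score(y_true, y_score):.2f}")
```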

The researchers admitted to various limitations in their work, noting, for example, that it's not a “foregone conclusion” that accurate predictions can improve care.

Among the science stars on the 35-researcher team were Google senior fellow Jeff Dean, head of the AI-focused “Google Brain” project; Stanford Neurosciences Institute professor Nigam Shah; and Alvin Rajkomar, director of clinical data science at UCSF's Center for Digital Health Innovation.
