Editorial: Can algorithms help or hurt child protective services?
An algorithm is a step-by-step process, often mathematical and often carried out by a computer, for working through a problem to reach a solution.
They have become more and more a part of our lives as computers run everything around us. Algorithms are behind traffic lights, facial recognition on phones and the ads that pop up on Facebook, eerily tied to the very thing you were just talking about with your best friend.
They power artificial intelligence and predictive analytics, marketing and management. They are integral to getting all kinds of jobs done.
But are they integral to every job? Aren’t there a few things that don’t need to operate by equation?
A recent Associated Press story looked at how Allegheny County is using algorithms in child protective services. It isn’t the only place using computers to augment things like health care and protective services. Carnegie Mellon helped the state track COVID-19. Harvard’s Teamcore group is looking at ways to use artificial intelligence for public service goals, including delivering services and using social media to assess risks among homeless youth.
But there are risks to removing the humanity from human services.
The AP story used Carnegie Mellon research to highlight a pattern: Allegheny County’s algorithm flagged a disproportionate number of Black children for mandatory neglect investigations compared with white children. Social workers disagreed with the algorithm about one-third of the time.
It isn’t that social workers are perfect. They aren’t. They miss things, and they do that because they are human beings and people make mistakes.
But algorithms work within a realm of numbers and percentages. They acknowledge that things will be missed and others will be misidentified, but if there is a high enough chance of accuracy, well, that’s OK.
When calculating the odds of someone being pregnant because they bought a stroller, there is no downside if it turns out the stroller was just a baby shower present.
But if Black families are inaccurately seen as more likely to neglect their children, that can have consequences.
For the families, it could mean unnecessary stress on parents and kids. It also might mean risks for white kids aren’t being picked up. For the agencies, it could mean additional workloads, as real people have to follow up on the flagged cases.
“Workers, whoever they are, shouldn’t be asked to make, in a given year … 16,000 of these kinds of decisions with incredibly imperfect information,” said Erin Dalton, director of the Department of Human Services in Allegheny County. She called the Carnegie Mellon report a “hypothetical scenario that is so removed from the manner in which this tool has been implemented to support our workforce.”
That is fair. But Carnegie Mellon — ranked No. 1 in the country by U.S. News & World Report for artificial intelligence — knows a little something about algorithms and shouldn’t have its research casually dismissed as merely hypothetical.
The fact is an algorithm isn’t simply an alternative to people, a way to avoid human mistakes. It is just one step removed from them. Algorithms are designed by people and dispassionately execute their designers’ commands. They may be faster and take some of the load off human workers, but they can still be subject to the assumptions and perceptions of their flesh-and-blood designers.
This points to a perpetual problem in the field of protective services. In the name of privacy and the protection of children, child welfare agencies are opaque. There is no way to independently check how the use of algorithms plays out, the way one could audit something like spending by a state agency.
Because the end result has so little transparency, it is all the more important that the data available be taken seriously.