Algorithmic Decision Making, the Law, and Child Welfare

by Matthew Trail, Research Fellow at the Max Planck Institute for Research on Collective Goods

Algorithmic predictive models are increasingly used in the child welfare field. These models may help child welfare professionals determine whether a child is in danger (Vaithianathan et al., 2017), whether a child will achieve permanency (Stepura et al., 2021), or even whether a family might be successfully reunified (Purdy & Glass, 2020). The goal of all of these models is to improve human decision making and, ultimately, child welfare outcomes.

However, critics of the models note that they are built on historical data that contains errors and reflects human and racial biases, and that such data does not account for a family's current circumstances (Gerchick et al., 2023). Critics also point out that the models predict behavior based on aggregate averages and cannot actually know what an individual parent or family will do (Keddell, 2019).

It is unclear how many jurisdictions are currently using predictive models, but a 2021 ACLU report estimated that more than half of the states had tried some form of model, and even then the authors acknowledged they were likely undercounting the actual total (Samant et al., 2021). When I practiced in Texas, I encountered algorithmic models without realizing at the time that decisions about my clients' service levels were coming from a machine.

What is clear is that the models have been used primarily by CPS staff for internal, CPS-related decision making; lawyers and judges have not been involved in their use, nor does their use generally make it into court hearings. In some cases, legal professionals and the court are excluded intentionally (Allegheny County Department of Human Services, 2018).

Because judges and attorneys are essential to decision making in the U.S. child welfare system, I set out to test whether a predictive risk model could change legal opinions regarding removal and placement. Using a vignette survey of child welfare and juvenile justice attorneys across the country, I found that high and medium risk scores could make lawyers change their minds and favor removal and foster care placement. Conversely, low risk scores could sway lawyers the other way, toward keeping the child with the biological parents (Trail, 2024). Though this effect was not large, it is consistent with findings from other researchers showing that humans can be persuaded by advice from machines (Grgić-Hlaća et al., 2022) and with research specifically examining how risk scores affect CPS decision making (Fitzpatrick & Wildman, 2021).

Unfortunately, the law is still playing catch-up with the technology, so there are no uniform policies to guide child welfare attorneys and courts. However, the National Center for State Courts (NCSC) (2024) reports that multiple states have recently begun to enact legislation, promulgate court rules, and formulate codes of conduct governing the use of artificial intelligence in legal practice. Most of these new efforts focus on generative AI, such as large language models (LLMs), and are not specific to dependency proceedings (NCSC, 2024). Still, some researchers and CPS agencies are already examining how LLMs might best be used in child welfare work (Field et al., 2023).

What this means for dependency attorneys and courts is that the law governing predictive models and generative AI is unsettled, and rules will vary greatly between jurisdictions. Even so, attorneys have ethical duties regarding the use of AI and predictive models. Recently, the American Bar Association (2024) released its first formal guidance on the use of generative AI, which includes duties of competence. Essentially, lawyers must be technologically savvy enough to understand what the new technology does and what its limitations are. This is not the same as requiring attorneys to understand the inner workings of the algorithm itself, but it does impose an affirmative duty to learn how to use AI appropriately. The AI Rapid Response Team (2024) at the NCSC offered similar advice for courts, noting that judges' ethical duties likewise require them to stay current with technological advances that might impact the court. While both of these guidance papers focused on generative AI, it seems reasonable that a similar duty would also apply to predictive models.

For child welfare attorneys and judges, this first means learning whether their local CPS agency is using generative AI or predictive models, and for what purposes. Lawyers need to know what program or model is being used. Was it built for that purpose? Who built it? Who at the agency actually understands how it works? The reality is that these are most likely questions for state offices, so asking the local caseworkers is probably not sufficient.

Ultimately, AI and predictive models have a lot of potential in child welfare, but attorneys and courts have their own duty to challenge the use of untested technology that may have unintended negative impacts on their clients. This technology should not become a substitute for human judgment and good casework.

Matthew Trail

Matthew Trail, JD, is a research fellow with the Max Planck Institute for Research on Collective Goods and a former child welfare attorney. Contact: [email protected]