Fostering Responsible Tech Use: Balancing the Benefits and Risks Among Public Child Welfare Agencies

by Maddy Dwyer, Policy Analyst, Equity in Civic Technology

Across the country, child welfare agencies work with over 390,000 youth in foster care each year by temporarily placing them in foster homes, facilitating adoption if parental rights are terminated, and managing their cases (Ford & Hetro, n.d.; Children’s Bureau, 2023). These agencies are tasked with the high-stakes responsibility of ensuring the safety and well-being of youth in their care, but they face many challenges, such as a lack of coordination across the agencies that work with foster youth, insufficient or biased data about a child’s environment, and heavy administrative burdens that contribute to high rates of social worker turnover.

To address these issues, child welfare agencies are using, or considering, data and technology systems, including artificial intelligence (AI) tools. However, despite the promise that data and technology hold, these systems risk entrenching racial and socioeconomic disparities, stigmatizing foster youth based on social and academic achievement, and compromising the privacy and security of personal data. This article, based on the Center for Democracy & Technology’s (CDT) detailed report, highlights the ways that data and technology can mitigate some of the problems child welfare agencies face, while also recognizing their inherent risks.

How Data and Technology, Including AI, Can Help Child Welfare Agencies Better Serve Foster Youth

There are a number of stated goals for incorporating data and technology, including AI, into the foster care system, namely:

Data sharing and portability can lead to better coordinated care. Inter-agency data sharing and portability can support youth in foster care who have complex physical and psychological health issues (American Academy of Pediatrics, 2021; Chuang & Wells, 2010). Improved coordination allows caseworkers and foster homes to access necessary information, even as the children they work with and care for change locations. One such solution is automated, secure data sharing between state child welfare agencies and Medicaid, which covers over 99 percent of foster youth (Children’s Defense Fund, n.d.).

Data sharing can support timely school enrollment and appropriate class placement. Foster youth switch schools more frequently than other children. Due to their high mobility, youth in foster care may experience changes in academic expectations, like differences in graduation requirements or course offerings, along with incomplete or delayed transfer of records that result in late enrollment or incorrect course placement (Barrett & Berliner, 2013; Laird & Quay-de la Vallee, 2019). Robust, secure data sharing between child welfare agencies and state education agencies can enable better communication and ensure that foster youth are receiving the educational support they need (The Data Quality Campaign, 2017). With this shared knowledge, child welfare and state education agencies can work together to facilitate proper class placement and enrollment based on a foster child’s specific needs.

Effective technology and data use can reduce caseworker burden. Technology such as chatbots and robotic process automation (RPA) is being touted as a way to ease some of caseworkers’ workload. Just as other public agencies have begun to leverage chatbots with generative AI capabilities, child welfare agencies could adapt chatbots to connect foster care families to the proper resources faster than a caseworker might be able to (Desouza & Krishnamurthy, 2017). For example, a foster parent who is curious about what financial resources their state or locality might offer can ask a chatbot, which can provide links to benefits programs they may be eligible for. RPA can also potentially assist with time-consuming data entry so that caseworkers can spend time on more productive tasks, such as interfacing with families (Wroblewska, et al., 2023). An RPA tool can, for example, trigger an alert that a caseworker needs to schedule a check-in with a specific family, or notify them that a foster child was truant from a class (see the sketch below). RPA may also send timely notifications to foster homes, such as when new support programs become available (Northwoods, 2023).
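To make this concrete, here is a minimal sketch of the kind of rule-based check an RPA tool might automate. The record fields, the 30-day interval, and the alert format are all hypothetical illustrations, not features of any actual agency system.

```python
from datetime import date, timedelta

# Hypothetical policy: caseworkers must visit each family monthly.
CHECK_IN_INTERVAL = timedelta(days=30)

def overdue_check_ins(cases, today):
    """Flag cases whose last caseworker visit exceeds the required interval."""
    alerts = []
    for case in cases:  # each case is a dict with hypothetical fields
        if today - case["last_visit"] > CHECK_IN_INTERVAL:
            alerts.append(
                f"Case {case['case_id']}: schedule a check-in "
                f"(last visit {case['last_visit'].isoformat()})"
            )
    return alerts

if __name__ == "__main__":
    sample_cases = [
        {"case_id": "A-102", "last_visit": date(2024, 1, 5)},
        {"case_id": "B-377", "last_visit": date(2024, 3, 1)},
    ]
    for alert in overdue_check_ins(sample_cases, today=date(2024, 3, 10)):
        print(alert)
```

The value of automation like this lies in reliably surfacing routine deadlines, leaving judgment calls to the caseworker.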

Data and technology can assist and expedite caseworker decision making. In addition to reducing administrative burden, emerging uses of data and technology claim to assist and expedite caseworker decision making through technology like predictive risk modeling (PRM). PRM is a form of data analysis that purports to use historical data to understand relationships among many factors to estimate a level of risk for a child. Both the factors that are considered and the definition of risk are determined by those who develop the model (Whicher, et al., 2022). For example, PRMs can assign risk levels, which can be used in conjunction with the caseworker’s knowledge of the case to make more informed decisions about which cases to prioritize (a simplified sketch of the mechanics appears below). This technology may also prevent children whose families otherwise might not have been investigated from “slipping through the cracks” (Hurley, 2018). As discussed in more detail below, the complexity of PRMs and their potential to affect crucial decisions mean that risk assessment and mitigation are particularly important.
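To illustrate the basic mechanics, and not any deployed tool, below is a minimal PRM sketch that fits a logistic regression to synthetic case data. The feature names, the outcome label, and the data itself are hypothetical stand-ins for the choices a model’s developers would actually make.

```python
# Minimal predictive risk modeling (PRM) sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" cases: columns are hypothetical factors,
# e.g., [prior_referrals, months_in_system].
X = rng.integers(0, 10, size=(200, 2)).astype(float)

# Hypothetical outcome label; in real PRMs, defining "risk" is itself
# a consequential design choice made by the model's developers.
y = (X[:, 0] + rng.normal(0, 2, 200) > 5).astype(int)

model = LogisticRegression().fit(X, y)

# The model outputs a probability that a screening tool might bucket
# into risk levels for a caseworker to weigh alongside their own judgment.
new_case = np.array([[3.0, 12.0]])
risk_score = model.predict_proba(new_case)[0, 1]
print(f"Estimated risk score: {risk_score:.2f}")
```

Even in this toy version, everything consequential (which factors enter X and how the outcome y is defined) is decided by people before the model ever runs.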

Irresponsible Data and Technology Use Can Harm Foster Youth

Unfortunately, the stated benefits of data and technology are not always realized, and irresponsible use can result in more harm than good. Key risks include:

Lack of access controls and improper disclosures can lead to stigmatization of foster youth and create safety and well-being concerns. Youth in foster care can suffer significant emotional, physical, and general well-being harms if their sensitive personal information is exposed, especially within the school context. Thus, it is important to limit access to this information to the individuals who need it to provide services to foster youth, and to ensure that those individuals do not disclose it to unauthorized third parties (Laird & Quay-de la Vallee, 2019; U.S. Department of Health and Human Services Administration for Children and Families, 2022; Foster Love, 2022).

Data and algorithmic bias. A pervasive, well-documented issue within the child welfare system is that members of historically marginalized communities, specifically Black families, who come into contact with the system face disparate treatment. In Illinois in 2007, for instance, African Americans made up 19 percent of the state’s population but comprised 59 percent of the foster youth population and 34 percent of the subjects of maltreatment reports to protective services (Horton & Watson, 2015). This overrepresentation of Black children and families in maltreatment investigations and subsequent placement in the child welfare system may be attributable, in part, to biased decision making.

Efforts to use data must account for biases embedded in that data, which is even more important if it is incorporated into algorithmic decision systems. In this case, algorithmic bias – the tendency of algorithms to make decisions that systematically disadvantage certain groups – may occur because “pre-existing societal prejudices are baked into the data itself” (Friis & Riley, 2023).

PRMs, RPA, or other AI tools trained on biased case data risk producing biased decisions and exacerbating racial or socioeconomic disparities (Gawronski, 2019). Because Black, Latinx, and Native American families and children are overrepresented in the child welfare system, it is possible that PRMs in particular may inadvertently further entrench existing disparities (Whicher, et al., 2022). Additionally, “government administrative data include more information on certain racial or ethnic groups compared to others because those groups are more likely to be involved in government programs,” potentially exposing those groups to further algorithmic scrutiny while failing to accurately identify needs in other communities (Whicher, et al., 2022). For example, a study found that use of the Allegheny Family Screening Tool in Pennsylvania was on its own “more racially disparate than workers, both in terms of screen-in rate and accuracy” (Stapleton, et al., 2022). A recent ACLU report similarly found that the Allegheny tool perpetuates racial and disability bias due to “arbitrary” algorithmic design choices (Gerchick, et al., 2023). A simplified sketch of the kind of audit that surfaces such disparities follows below.
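To show what such an audit looks like in miniature, the sketch below computes screen-in rates by demographic group from a tool’s historical decisions. The column names and data are hypothetical; a real audit, like the screen-in rate and accuracy analysis in Stapleton, et al. (2022), would be far more involved.

```python
# Minimal disparity audit sketch: compare screen-in rates across groups.
import pandas as pd

# Hypothetical log of past screening decisions (1 = screened in).
decisions = pd.DataFrame({
    "group":       ["A", "A", "A", "B", "B", "B", "B", "B"],
    "screened_in": [1,   0,   1,   1,   1,   1,   0,   1],
})

# Screen-in rate per group; a large gap is a signal to investigate
# whether the model, the underlying data, or both drive the disparity.
rates = decisions.groupby("group")["screened_in"].mean()
print(rates)

gap = rates.max() - rates.min()
print(f"Screen-in rate gap: {gap:.2f}")
```

Rate gaps alone do not establish bias, but they tell an agency where to look more closely, for instance by comparing model accuracy across the same groups.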

Over-reliance on AI. AI tools inherently lack the human judgment that experienced caseworkers bring to decisions about foster youth cases (Whicher, et al., 2022). Over-reliance on PRMs and other AI tools may result in children being removed from homes where they are not actually at risk, or where their situations might be improved by different forms of support.

Redirecting resources to unproven technology. Not all child welfare agencies will benefit from spending resources on data and technology, particularly when a product’s efficacy is unproven (Ho & Burke, 2023). Data and technology tools that lack independent evidence that they work as intended can actually create more work for child welfare agencies.

Cybersecurity and transparency risks. Intra- and inter-agency data sharing and technology use can increase the risk of data breaches (Wroblewska, et al., 2023). Depending on how these data and technology systems are set up, unauthorized people could gain access to case data, putting foster children’s privacy at further risk. As public-serving entities, child welfare agencies also risk public backlash if they fail to disclose their use of personal data (Whicher, et al., 2022).

As discussed in “Fostering Responsible Tech Use: Recommendations for Public Child Welfare Agencies,” located in the practice section of this publication, there are critical steps that public administrators should take to mitigate the potential harms of data and technology in the foster care system so that their benefits have a chance of being realized.

Maddy Dwyer is a policy analyst on the Equity in Civic Technology project at the Center for Democracy & Technology, where she focuses on the responsible use of data and technology by government agencies. She is the author of the CDT report Fostering Responsible Tech Use: Balancing the Benefits and Risks Among Public Child Welfare Agencies.