AI in social work

28th April 2026

Human in the loop

Stef Lunn, social work practice lead, Civica

How a strong partnership between AI and the professional can tackle the challenges of narrative data and bias.

Artificial intelligence in social work is increasingly diverse. It helps me to consider AI in three broad categories: narrow AI, generative AI and predictive AI.

Narrow AI deals with low-skilled, repetitive and burdensome tasks. These are the activities that social workers expect their technology to do for them because they eat up time without adding value. Workers are keen to shed scheduling, duplication across systems and other mundane tasks, so there’s little controversy or push-back there.

At the other end of the scale is predictive AI. Here, large data sets and actuarial models are used to predict likely future outcomes, anticipate behaviour and target interventions. This is the least commonly deployed and least tested use of AI in UK care.

Nestled between these two extremes is generative AI. This is heavily employed in many local authorities and generates strong feelings in the social work community. When we’re using AI to generate case reports and provide recommendations, we’re moving into territory where the human interaction that is so intrinsic to the helping professions may be displaced.

Yes, we want AI to do the heavy lifting of searching, reading and collating information, but that must not come at the cost of professional, human case analysis and decision making. This is perhaps more pertinent in social care than in any other discipline or industry; first and foremost, we are about people.

Let’s take a key challenge in social work to explore how AI might be employed both ethically and effectively in addressing it. Social care records are highly narrative. Whilst there are formal assessments, plans and reports where significant information is recorded in a structured fashion, a lot of information in case notes comes from informal human interaction: phone calls, conversations and agreed actions. Without a crystal ball to know whether a conversation will become pivotal in the future, it’s important to retain a thorough record. The result is an ever-growing trail of mostly unstructured data.

At a later date, if a new worker is assigned or the nature of the work changes, working through the data to find the key details can be like looking for a needle in a haystack. This is where the power of AI could be invaluable to the social worker searching for evidence of, for example, domestic abuse in a person’s record. It can dramatically increase the social worker’s efficiency in understanding the information already held on file.
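To make the idea concrete, here is a minimal sketch of that ‘needle in a haystack’ search. It uses simple keyword matching purely for illustration; a real product would use semantic search or a language model, and every name and note below is invented. Crucially, the tool only surfaces candidate notes for review; the professional judgement about what they mean stays with the social worker.

```python
# Hypothetical sketch: flag case notes that may relate to a topic of
# concern (e.g. domestic abuse) so a worker can review them directly.
# Keyword matching stands in for the semantic search a real system would use.

def flag_notes(notes, keywords):
    """Return (index, note) pairs where any keyword appears, case-insensitively."""
    lowered = [kw.lower() for kw in keywords]
    return [
        (i, note)
        for i, note in enumerate(notes)
        if any(kw in note.lower() for kw in lowered)
    ]

# Invented example data for illustration only.
case_notes = [
    "Phone call with Ms A about school transport arrangements.",
    "Home visit: Ms A disclosed a previous injunction against an ex-partner.",
    "Email to GP requesting an updated medication list.",
]

for i, note in flag_notes(case_notes, ["injunction", "refuge"]):
    print(f"Note {i}: {note}")
```

The flagged notes are a starting point for the human in the loop, not a conclusion: the worker reads each one in context and decides its significance.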

This extra power comes with a drawback, however. It is well documented that AI can be biased. In the same way as humans struggle to think outside the paradigm of the culture that they live in, AI inherits bias from its training and source material. Social workers offer the ideal antidote to this concern, as they are trained to identify and challenge discrimination, using anti-oppressive practice techniques.

In the current discourse on AI in social work, the importance of the professional ‘human in the loop’ is universally recognised and must remain a key safeguard in the deployment of AI.

How we ensure this in practice with sufficient safeguards is a complex question, but this partnership between AI’s searching power and the professional’s judgement will ensure that we can both ‘find the needle’ and then determine its meaning in the real world.

A registered social worker, Stef is an expert in digital transformation for social care. She sits on the 80:20 steering group and special interest AI group for the British Association of Social Workers and has also served as a specialist advisor to the Care Quality Commission for Adult Social Care.