An artificial intelligence system used by the UK government to detect welfare fraud has come under scrutiny for exhibiting bias based on individuals’ age, disability, marital status, and nationality. The Guardian has reported that internal assessments reveal the machine-learning program guiding investigations incorrectly selects people from some demographic groups more often than others when recommending whom to investigate for fraud in universal credit claims.
According to documents disclosed by the Department for Work and Pensions (DWP) under the Freedom of Information Act, a “fairness analysis” conducted in February found a “statistically significant outcome disparity” in how the automated system selects claims for investigation. The finding contrasts sharply with the DWP’s public assurance that the system raised “no immediate concerns of discrimination”.
The DWP justified the system’s continued use by emphasizing that human caseworkers make the final decisions about welfare payments. The department argues that the approach, which it describes as “reasonable and proportionate”, is designed to combat an estimated £8 billion lost each year to fraud and error in the benefits system. However, no fairness analysis has yet been conducted for other protected characteristics, such as race, sexual orientation, and gender reassignment status.
In response to these findings, campaigners have condemned the government’s approach as a “hurt first, fix later” policy. Caroline Selman, a senior research fellow at the Public Law Project, criticized the DWP for failing to adequately assess whether its automated processes unfairly target marginalized communities. “DWP must put an end to this ‘hurt first, fix later’ approach,” she said, urging greater transparency about which groups might be unjustly flagged as fraudulent.
The disparities have intensified scrutiny of the government’s growing reliance on AI. An independent assessment has identified at least 55 automated tools in use across UK public authorities, potentially affecting decisions about millions of people, compared with just nine systems on the government’s official register. The revelation that no Whitehall department has registered an AI system since registration became mandatory raises further questions about oversight.
Government officials, including Peter Kyle, the secretary of state for science, innovation and technology, have acknowledged a lack of transparency in the public sector’s use of algorithms. Yet they often decline to disclose details about these systems, citing fears that exposing how they work could allow fraudsters to exploit them.
The DWP has not disclosed which age groups, or which disabled people, are more likely to be unfairly targeted by the algorithm; it has redacted this data from the fairness analysis to prevent possible manipulation of the system.
A DWP spokesperson reiterated that the AI tool does not replace human judgment and pointed to the department’s fraud and error bill, which the DWP says will make investigations into those who defraud the benefits system more efficient.