Not a single Whitehall department has registered its use of artificial intelligence (AI) systems as required, raising alarm about transparency in the public sector. The finding comes after the government made registration mandatory to improve oversight of algorithmic technologies that shape millions of people's lives.
AI is already embedded in government decision-making, influencing areas such as benefit distribution and immigration enforcement. Yet despite an extensive stream of contracts for AI and algorithmic services, including a £20 million contract for facial recognition technology, only nine algorithmic systems appear on the public register, and none of them covers the welfare system, the Home Office, or police use.
The government’s push for accountability began in February this year, when it announced that all departments would be required to maintain an AI register. Experts and advocates have warned that opaque adoption of AI can cause serious social harm, as past failures such as the Post Office Horizon IT scandal have shown.
Peter Kyle, the secretary of state for science, innovation and technology, acknowledged in an interview that the public sector has generally not recognized the need for transparency about its use of algorithms. “I accept that if the government is using algorithms on behalf of the public, the public have a right to know,” he said. The admission underscores the need for openness as algorithms play a growing role in government operations.
The privacy rights organization Big Brother Watch criticized the facial recognition contract as yet another instance of governmental opacity over AI. Its chief advocacy officer, Madeleine Stone, said that secretive deployment of such technology undermines trust and accountability and puts the public’s data rights at risk.
The Ada Lovelace Institute has warned that AI systems adopted uncritically, even where they reduce administrative burdens, risk producing discriminatory or ineffective results and damaging public trust.
The national register, launched in late 2022, lists systems including a Cabinet Office tool for identifying digital records of long-term value, an AI camera for analyzing pedestrian crossings in Cambridge, and a system that helps patients review NHS services. Recent data from Tussell, however, shows that public bodies have entered into 164 AI-related contracts since February.
Prominent tech companies, including Microsoft and Meta, are heavily promoting their AI systems to government departments. A report funded by Google Cloud claimed that wider use of generative AI could free up as much as £38 billion across the public sector by 2030, an optimism echoed by Kyle, who has acknowledged generative AI’s revolutionary potential.
The UK government is already deploying AI across its agencies. The Department for Work and Pensions uses generative AI to read more than 20,000 documents a day to inform decision-making, the Home Office runs an AI-powered immigration enforcement system, and several police forces use facial recognition technology to identify suspects.
The potential benefits of AI in public services are real, but the lack of transparency remains alarming. As the government advocates greater use of AI technologies, it will be critical to ensure that their deployment is both ethical and accountable to the citizens it affects.