UK Introduces Data Reform Bill and Proposes AI Regulation Approach

On July 18, 2022, the United Kingdom introduced a number of data reform initiatives aimed at promoting innovation and regulating the use of artificial intelligence.

The Data Protection and Digital Information Bill (“DPDI”), which contains measures to “use AI responsibly while reducing compliance burdens on businesses to boost the economy,” is currently facing delays and a new public consultation but, if enacted, would amend the current rules on data protection and privacy, including those governing AI. The DPDI, as presented, clarifies the conditions under which organizations may use automated decision-making. If a decision has a legal or similarly significant effect on an individual and involves the processing of sensitive “special category” data, it cannot be made solely on an “automated decision basis” with no “meaningful” human involvement, except in very limited circumstances. Otherwise, automated decision-making systems may be used, subject to safeguards designed to “protect the individual’s rights and freedoms.” These safeguards include requirements that the organization deploying the automated decision-making system provide information about the decisions it makes and give individuals about whom a decision is made the opportunity to make representations, obtain human intervention, and contest the decision.


Alongside the new legislation, the government released a series of policy initiatives outlining its approach to regulating AI in the United Kingdom, reiterating its commitment to sector-specific regulation and a “less centralized approach than the EU.” Its “AI Action Plan” highlights the UK government’s “focus on supporting growth and avoiding unnecessary barriers being placed on businesses,” emphasizing that the proposal will “allow different regulators to take a tailored approach to the use of AI in a range of settings . . . [which] better reflects the growing use of AI in a range of sectors.” The guidance outlines six fundamental principles that developers and users must adhere to:

(1) ensure that AI is used safely;

(2) ensure that AI is technically secure and functions as designed;

(3) make sure that AI is appropriately transparent and explainable;

(4) consider fairness;

(5) identify a legal person to be responsible for AI;

(6) clarify routes to redress or contestability.

Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency will be tasked with interpreting and implementing the principles, and they will be encouraged to consider “lighter touch options,” such as guidance, voluntary measures, or the creation of sandboxes.

Available Resources

UK Data Protection and Digital Information Bill

Spencer, M. (2022, September 5). Business Statement [Hansard], Vol. 719.

UK Government, Press Release, “UK sets out proposals for new AI rulebook to unleash innovation and boost public trust in the technology.”
