The EU AI Act protects fundamental human rights – EP0003


The EU’s General Data Protection Regulation, which took effect in 2018, is making its way across Europe, and other regions of the world, such as the Americas and Asia, are adopting similar data privacy laws. Since these regulations were adopted, national regulators have already imposed fines on companies that violated citizens’ data privacy rights, often following large data breaches. You can Google data breaches today and find the latest news articles from across the world.

More recently, the proposal for the EU AI Act was published in April 2021. This is the regulation on artificial intelligence across the 27 EU Member States. The legislation is intended to protect EU citizens’ fundamental rights. It also covers AI products or services supplied by providers outside of Europe to organizations or individuals within the EU.

The European Parliament filed thousands of amendments back in June 2022, and the legislative process is expected to be wrapped up around June 2023. Changes to the proposal have also been put forward by human rights groups as well as global companies such as Microsoft and Google.

So, before the proposed AI regulation becomes law, we have a few years of heated negotiations ahead of us.

Critics claim that the AI law will fall short of its goal of fostering the development of “trustworthy” AI unless significant changes are made to the regulation.

The AI regulation takes a risk-based approach: AI systems are divided into four categories based on their perceived risk.

AI with unacceptable risk

This risk group comprises the AI systems that the European Commission deems the most harmful and wants to ban outright. The proposed regulation specifically forbids systems that exploit a person’s vulnerability due to age or physical or mental disability, “real-time” remote biometric identification systems in public spaces, and systems that enable “social scoring” by governments. The Commission used the example of a voice-activated toy that might encourage risky behaviour in young children.

AI with high risk

The proposed regulation covers artificial intelligence (AI) applications in critical infrastructure, law enforcement, and employment settings (e.g., AI used to screen CVs in recruitment procedures), as well as AI used as a safety component of products.

Before they may place these kinds of systems on the market, providers, importers, distributors, and professional users of high-risk AI systems will be required to fulfil several obligations, including:

  • Carrying out adequate risk assessments.
  • Using high-quality datasets to, for instance, reduce discriminatory outcomes.
  • Putting suitable human oversight procedures in place to reduce risk.
  • Meeting high standards of accuracy, security, and robustness.
  • Registering the AI product in an EU-wide database.
  • Reporting serious incidents or malfunctions to national competent authorities.

AI with limited risk

This includes “AI systems intended to interact with natural persons” (such as chatbots), as well as AI that “mimics existent humans, objects, places” (i.e., deep fakes). To ensure that users are informed that they are engaging with a machine, or that the content has been artificially generated or altered, specific transparency obligations will be imposed.

AI with minimal risk

The proposed Regulation outlines innovation-supporting measures meant to encourage the unrestricted use of AI systems that pose “little or no danger” to the rights or safety of individuals. The Commission has in mind systems such as spam filters or video games with AI. The suggested measures include “sandboxes”: controlled environments in which AI systems can be developed, tested, and certified before being sold or put into use. Personal data processing will be allowed within these sandboxes for the development of AI systems that benefit society. EU Member States are specifically required to make sure that start-ups and small-scale providers have access to these “sandboxes”, are supported with the necessary guidance, and that costs remain reasonable.

Why do we need AI regulation?

Our current system for assigning blame and compensating victims of harm is totally unprepared for AI. Liability laws were created in an era when most errors and damage were caused by people. Therefore, most liability frameworks impose penalties on the end user who harmed the injured party: the physician, the driver, or another human offender. But with AI, mistakes can happen entirely independently of human input, so the liability system needs to be adjusted accordingly. Poor liability policies will harm patients, customers, and AI developers alike.

Unlocking the potential of AI depends on getting the liability landscape right. Unclear rules and potentially expensive litigation will discourage investment in, development of, and deployment of AI systems. The framework that determines who, if anyone, ends up accountable for harm caused by an artificial intelligence system will determine how widely AI is adopted in health care, autonomous vehicles, and other industries.

Harmful cases

We review press reports of harmful cases that make clear that, without AI regulation or a revised liability system, AI can tragically affect humans.

iBorderCtrl

In 2019, an AI lie detector was trialled at the borders of Greece, Hungary, and Latvia to monitor travellers’ facial expressions, voice tonality, and eye movements.

The trial raised controversy. Psychologists have generally concluded that polygraphs and other technologies designed to identify falsehoods from bodily characteristics are unreliable.

The media reported that the lie-prediction algorithm failed to work, and the project’s own website admitted that the technology “may imply implications for fundamental human rights.”

Child benefit fraud detection

An algorithm used by the Dutch tax office between 2013 and 2020 to detect possible child benefit fraud was found to have harmed tens of thousands of people and led to more than 1,000 children being placed in foster care. The flawed system, which disproportionately affected immigrants, used information such as whether a person held a second nationality as a trigger for investigation.

Tesla – self-driving car

In December 2019, a Tesla operating with an artificial intelligence driving system struck and killed two people in Gardena, California. The Tesla driver may spend many years behind bars.

Self-driving Uber vehicle

In 2018, a self-driving Uber test vehicle struck and killed a pedestrian. The safety driver took the blame, but in fact the AI system had failed to detect the pedestrian.

AI mental health chatbot

A mental health chatbot powered by AI recently encouraged a fictitious suicidal patient to take their own life.

Discrimination against female candidates

AI recruitment systems have been found to discriminate against the resumes of female applicants.

AI misidentification of suspect

An AI algorithm incorrectly identified a suspect in a severe assault case, resulting in a wrongful arrest.

Conclusions

These are some of the harmful use cases that show why AI legislation is needed to protect citizens’ fundamental rights, and why such legislation should also lead to a revision of our liability systems. There is no question that AI benefits businesses and society in numerous ways, but it also has the capacity to be extremely harmful. Great power entails enormous responsibility.