OpenAI actively lobbied the EU, shaping AI legislation to balance benefits and risks, with some proposed changes incorporated into the final AI Act.
OpenAI has been actively engaged in lobbying the European Union to influence the incoming AI legislation. Recent documents obtained by Time from the European Commission reveal that OpenAI, the creator of ChatGPT, made specific requests for amendments to a draft version of the EU AI Act. This upcoming law aims to enhance the regulation of artificial intelligence usage. OpenAI’s lobbying efforts sought to ensure that the legislation effectively balances the benefits and potential risks of AI technology.
The amendments proposed by OpenAI were aimed at refining and improving certain provisions of the EU AI Act. These suggestions were made before the European Parliament approved the legislation on June 14th. Notably, some of OpenAI’s proposed changes were incorporated into the text ultimately approved by the Parliament.
Now, let’s delve into OpenAI’s input on the European Union’s Artificial Intelligence Act and examine the company’s stance on various key aspects of the legislation.
OpenAI’s lobbying efforts
OpenAI has made its voice heard in the discussions surrounding the European Union’s Artificial Intelligence Act (AIA), urging lawmakers to consider certain amendments to the draft version of the EU AI Act before its approval by the European Parliament on June 14th. These proposed changes were aimed at ensuring the responsible and beneficial use of artificial intelligence, and some of them were eventually incorporated into the legislation.
OpenAI, with its mission to develop and deploy artificial general intelligence (AGI) for the greater good of humanity, holds a prominent position in the AI landscape. The company’s foundational charter emphasizes the importance of safe and beneficial AGI, and it invests not only in core AI research and development but also in policy research, risk analysis, and technical infrastructure to maximize the safe use of its AI technologies.
General purpose AI systems
The European Union’s Artificial Intelligence Act (AIA) seeks to enhance public trust in AI tools and establish comprehensive regulations governing their use. OpenAI shares the EU’s objective of building trust and believes that the AIA will play a pivotal role in achieving this goal. Many of the themes and requirements outlined in the AIA align with the practices and mechanisms already employed by OpenAI to strike a balance between technological progress and the safe and beneficial use of AI.
One key aspect of the AIA under discussion is the treatment of general purpose AI systems. OpenAI’s GPT-3 language model, for instance, is a prime example of such a system, capable of performing a wide array of language-related tasks. While GPT-3 itself may not be considered a high-risk system, its capabilities can potentially be employed in high-risk use cases. OpenAI has implemented strict guidelines, best practices, and limitations to ensure responsible use of its services. The company outlines specific “high stakes applications” that require additional scrutiny to identify and manage potential risks. By releasing products with baseline capabilities and stringent restrictions, OpenAI ensures an iterative deployment process that allows for continuous learning and improvement based on user feedback.
OpenAI proposes a reframing of the language
However, OpenAI expresses concern that the proposed language in the AIA may inadvertently encompass all general-purpose AI systems by default. The current exemption clause provides that providers of general-purpose AI systems are exempt from certain requirements if they explicitly exclude high-risk uses in their instructions and accompanying information. The subsequent clause, however, states that this exclusion is not justified if the provider has sufficient reason to believe the system may be misused. OpenAI argues that this wording might unintentionally discourage providers from addressing and mitigating risks, and it proposes reframing the language to incentivize providers to actively anticipate and address potential misuse.
Generative AI systems, another focus of the AIA, are AI systems capable of producing text, audio, or video content that closely resembles human-generated content. OpenAI acknowledges the importance of transparency in disclosing artificially generated or manipulated content. However, instead of adding separate requirements under Annex III, OpenAI suggests aligning the transparency obligations within the existing framework of the AIA. The company has already implemented mechanisms to verify the synthetic origin of images generated by its DALL·E system and constantly updates its Content Policy to address concerns surrounding deepfakes and artificially generated content.
OpenAI’s iterative deployment model
The AIA also addresses the requirement for new conformity assessments when substantial modifications are made to AI systems. OpenAI raises concerns that this requirement may hinder innovation and delay the implementation of safety improvements. The company proposes excluding modifications made for safety or risk mitigation purposes from the scope of substantial modifications, as long as they do not pose a negative impact on health, safety, or fundamental rights. OpenAI’s iterative deployment model allows for continuous reassessment of features and risk levels, enabling the implementation of safety and security changes in a timely manner.
Furthermore, OpenAI highlights the need for clarity regarding the scope of certain high-risk use cases listed in Annex III of the AIA. While acknowledging the importance of addressing high-risk scenarios, OpenAI believes that low-risk applications within sectors like education and employment should not be unintentionally restricted. The company suggests refining the language to focus on use cases with material impacts on individuals’ opportunities while allowing for low-risk applications that support human decision-making processes.
OpenAI appreciates the opportunity to contribute to the ongoing discussions surrounding the European Union’s Artificial Intelligence Act. The company shares the EU’s commitment to the responsible and beneficial use of AI technologies. OpenAI believes that thoughtful regulation and policy approaches are essential in ensuring that powerful AI tools benefit society at large. With a dedication to safety and ethical considerations, OpenAI stands ready to assist and advise in any way necessary to promote the safe and ethical development of AI.
Fede Nolasco, author of this blog article, encourages readers to join the conversation and provide their perspectives on the European Union’s Artificial Intelligence Act. If you have any questions or would like to contribute further, please feel free to reach out. The future of AI depends on collaborative efforts and responsible stewardship.
“Progress in AI research and deployment should be aligned with the best interests of humanity.” – Fede Nolasco
- World Economic Forum: EU Artificial Intelligence Act Explained
- OpenAI’s Charter: Ensuring Long-Term Safety
- OpenAI AIA White Paper (23 September 2022)
- Microsoft guide to responsible AI governance – Datatunnel
- Proposed EU Artificial Intelligence Act
- Initial Appraisal of a European Commission Impact Assessment