Technology | The European Parliament has reached a political agreement on the Artificial Intelligence Act: What You Need to Know

The rising popularity of ChatGPT has led to a surge of interest in AI companies worldwide, with a consequential increase in scrutiny of the development and implementation of AI technology. As a result, the ongoing discussions regarding the Artificial Intelligence Act, the world’s first Artificial Intelligence regulation, have become a significant topic of interest, with companies and individuals alike closely following the developments and potential implications of the proposed legislation. After many months of intense negotiations, Members of the European Parliament reached a provisional agreement on 27 April 2023 on the contents of the Artificial Intelligence Act.

What is the Artificial Intelligence Act?

In 2021, the European Commission (“Commission”) proposed the Artificial Intelligence Act (“AIA”), a regulation that aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. The AIA seeks to ensure that AI systems are developed and used in a way that is safe, transparent, and respects fundamental rights. The regulation will apply to providers and users of AI systems in the EU, regardless of whether they are based in the Union or a third country, and will also apply where the output produced by an AI system is used in the EU.

The AIA applies to a wide range of AI systems, including those used in healthcare, transportation, and public safety, among others. The regulation sets out different categories of AI systems and imposes varying levels of obligations depending on the potential risks associated with their use. The AIA also provides for the establishment of a European Artificial Intelligence Board to oversee the implementation of the regulation, issue opinions, and share best practices.

What are the key provisions of the AIA?

The AIA sets out a number of key provisions that will have significant implications for companies that develop or use AI systems in the EU. Some of the key provisions include:

  • Risk-based approach: The AIA adopts a risk-based approach, with different levels of obligations depending on the potential risks associated with an AI system. The regulation sets out four different categories of AI systems, ranging from minimal risk to unacceptable risk.
  • Prohibited AI practices: The AIA prohibits certain AI practices that are considered to pose an unacceptable risk, including AI systems that manipulate human behaviour, create deepfakes, and use subliminal techniques to influence human behaviour.
  • Transparency requirements: The AIA requires that AI systems be transparent, with clear information provided to users about the system’s capabilities, limitations, and intended use. The regulation also requires that users be informed when they are interacting with an AI system.
  • Human oversight: The AIA requires that AI systems be subject to human oversight, with clear procedures in place for human intervention in the event of a problem or error.
  • Data requirements: The AIA sets out specific data requirements for AI systems, including requirements for data quality, data protection, and data security.

What are the potential implications of the AIA?

The AIA may affect all use of AI systems in the EU. The risk-based approach in the AIA, which bans certain AI systems and heavily restricts others, should motivate providers and customers to review the AIA before and during the development and procurement of any products or services incorporating AI systems. Providers may otherwise risk developing products or services which are heavily restricted in terms of further development, commercialization, and use. Although the regulation focuses on providers of AI systems, users of AI systems are also covered by the regulation. We therefore expect increased scrutiny from the customer side and increased due diligence aimed at AI systems before use. To ensure that AI providers and users stay compliant, the Commission will carry out market investigations and issue fines to sanction infringements of the AIA.

As an example, one of the issues debated by the European Parliament has been how to regulate general-purpose AI systems such as ChatGPT and DALL-E. In the recent negotiations, it was agreed that such systems must be designed and deployed in accordance with EU law and fundamental rights, including freedom of expression and information. The Members of the European Parliament also agreed that companies deploying such AI systems will have to disclose any copyrighted material used to develop their systems.

What will be the consequences of non-compliance and infringements?

Failure to comply with the AIA may result in significant fines of up to EUR 30 000 000 or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher. The regulation also heavily sanctions the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request related to the regulation, with fines of up to EUR 10 000 000 or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Companies also face the risk of reputational damage, which may in some cases far exceed the damage caused by fines.

Timeline

The European Parliament reached a provisional political deal on the regulation on 27 April 2023. The Act may be subject to minor adjustments ahead of a key committee vote scheduled for 11 May 2023, but it is expected to go to a plenary vote in mid-June 2023. The AIA is therefore expected to enter into force in 2023, and assuming a two-year transition period, it would become applicable in 2025. Considering the proposal’s relevance to the EEA/EFTA countries, and their consistent adoption of comparable measures aligned with EU regulations, it appears probable that the proposal will also be implemented in those countries.

BAHR’s view

When passed, we believe that the AIA will influence the development and use of AI not only inside the EU, but also beyond the borders of the EU/EEA, given the influence of the EU. Companies that develop or use AI systems in the EU, or offer AI systems to EU customers, will need to carefully assess the potential risks associated with their systems and ensure that they comply with the obligations set out in the regulation. Given the wide implications of AI within almost all fields and sectors, we believe that the AIA will have large implications for both individuals and businesses globally. The remedies, which include fines of up to 6% of a company’s total worldwide annual turnover, should motivate companies developing or using AI to perform an assessment of compliance with the AIA before it becomes applicable.
