The Artificial Intelligence Act has been approved
The first proposal for the AI Act was published by the European Commission in 2021 and was the world’s first proposal for general, risk-based legislation on AI. This and later proposals have been the subject of extensive discussion among legal practitioners, scholars, politicians and advocacy groups in the tech sector. In this article, we provide a short overview of the foundations and scope of the AI Act, as well as our perspective on the Act’s implications.
The AI Act at a glance
Overview
The AI Act is the world’s first legal framework specifically designed to regulate artificial intelligence. Its purpose is to promote the uptake of human-centric and trustworthy AI, while ensuring a high level of protection of health, safety, and fundamental rights. The Act seeks to achieve this primarily through obligations imposed on providers and deployers of AI systems and general-purpose AI models.
The obligations vary depending on the risk level of the AI systems in question. The AI Act distinguishes between four risk levels:
- Unacceptable risk (banned uses of AI)
- High risk
- Limited risk
- Minimal risk
The highest standard is set for providers of high-risk AI systems, who must, among other things, comply with the following main requirements:
- Establishing, implementing, documenting, and maintaining a risk management system
- Ensuring that training, validation, and testing datasets are subject to appropriate data governance and management practices
- Drawing up and keeping up-to-date technical documentation
- Ensuring that the high-risk AI system technically allows for the automatic recording of events (‘logs’)
- Ensuring sufficient transparency
- Ensuring that human oversight of the system is possible
- Ensuring an appropriate level of accuracy, robustness, and cybersecurity
The AI Act also imposes obligations on importers, distributors, and deployers of high-risk AI systems.
The obligations for limited-risk AI systems are significantly lighter, consisting primarily of transparency requirements. The AI Act also contains specific requirements for general-purpose AI models, including risk mitigation measures and transparency with regard to the content used for training.
General Purpose AI and the AI Act
The release of general-purpose AI systems such as ChatGPT in late 2022 served as an initial stress test for the rules outlined in the EU Commission’s proposal. While there was broad consensus on the inherent risks of such systems, their release sparked discussion about omissions in the Commission’s initial proposal.
An important change in the adopted version of the Act, compared to the 2021 proposal, is the introduction of two new classes of AI systems: general-purpose AI models (GPAI) and general-purpose AI models with systemic risk. These categories aim to address the evolving challenges posed by the advent of general-purpose AI platforms, such as OpenAI’s ChatGPT.
Putting the AI Act into context
The AI Act is part of a comprehensive package of policy measures designed to support the development of trustworthy AI within the EU. This package includes, in addition to the AI Act, the EU AI Innovation Package and the EU Coordinated Plan on AI. The Coordinated Plan on AI, first published in 2018, represents a joint commitment by the Commission, EU Member States, Norway, and Switzerland to maximize Europe’s potential to compete on a global scale. Furthermore, in 2022, the Commission proposed the AI Liability Directive, which aims to establish uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems.
Existing regulations may also affect the use of AI, such as the GDPR when AI systems process personal data. The AI Act clarifies its relationship with the GDPR, notably with regard to risk assessments.
Next steps
The AI Act enters into force twenty days after its publication in the Official Journal. Following its entry into force, the rules of the AI Act will become applicable in stages:
- 6 months after entry into force: the prohibitions on AI practices posing unacceptable risk
- 12 months: the obligations for general-purpose AI models
- 24 months: most remaining provisions, including the requirements for high-risk AI systems listed in Annex III
- 36 months: the requirements for high-risk AI systems covered by existing EU product legislation (Annex I)
Developments in Norway
In the fall of 2023, a Norwegian government working group assessed the AI Act to be EEA-relevant. The AI Act is therefore expected to be incorporated into Norwegian law following its adoption.
Although we anticipate that the implementation of the AI Act in Norway may take some time, Norwegian companies that develop and offer AI systems within the EU must comply with the AI Act from the time its various parts become applicable. The same applies to Norwegian companies that put their name or trademark on high-risk AI systems, make substantial modifications to such systems, or modify the intended purpose of an AI system.
Furthermore, independent initiatives have been launched to boost innovation and research in AI in Norway. Last fall, the Ministry of Education and Research announced a significant investment of NOK 1 billion over five years to fund research in AI, machine learning, and digital technology. This initiative, administered by the Research Council of Norway (No: Forskningsrådet), is set to kick off this year, with the first application deadline in June 2024.
BAHR’s view
The AI Act is a landmark regulation, and its risk-based approach has already inspired similar regulations globally. We anticipate that the AI Act will continue to spark debate and be subject to comprehensive analysis as its provisions are interpreted and put into practice.
For Norwegian companies, the AI Act will affect both the AI systems offered on the market and the use of AI technology in general. All companies that develop or use AI should assess how the new rules impact them, and stay alert to signals from regulatory authorities, changes to the contractual frameworks of the dominant IT companies to account for AI features, and new forms and uses of AI.