Technology | Key developments following the AI Act
Norway has signed the Convention on Artificial Intelligence
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“the AI Convention”) is the first legally binding international agreement on AI and forms part of the EU’s efforts to establish broad international regulation of the use and development of AI. More than 70 countries participated in the negotiations, and the AI Convention has so far been signed by the EU, the UK, the US, Israel, Norway, Andorra, Georgia, Iceland, the Republic of Moldova, and San Marino. Norway signed the AI Convention on 5 September 2024, during an informal conference of the Council of Europe’s Ministers of Justice.
The purpose of the AI Convention is to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law. The AI Convention places obligations on the signatory states to adopt or maintain appropriate measures to give effect to its provisions.
Like the AI Act, the AI Convention establishes a risk-based approach to AI. Based on the risks identified, the parties to the AI Convention must implement measures to assess, prevent and mitigate the risks posed by AI. Each party must adopt measures that ensure transparency and oversight of artificial intelligence systems, and that ensure accountability and responsibility for activities within the lifecycle of such systems. The parties to the AI Convention must additionally ensure that such activities respect equality and the prohibition of discrimination.
The obligations in the AI Convention will apply to activities within the lifecycle of artificial intelligence systems, in both the public and the private sector.
The AI Convention is consistent with the AI Act and will be implemented in the EU by means of the AI Act. The AI Convention incorporates a number of key concepts from the AI Act, including the aforementioned risk-based approach, as well as transparency and risk management obligations.
The AI Convention also allows the parties to ban certain uses of AI where such uses are considered incompatible with respect for human rights, the functioning of democracy or the rule of law. The interconnection with the AI Act underscores the EU’s significant role in shaping policy and the regulatory approach to AI.
AI Pact initiative
The AI Act entered into force on 1 August 2024, with its rules applying in stages. While the prohibitions on AI systems posing unacceptable risk will apply as early as 2 February 2025, the obligations for high-risk AI systems will only come into force after a transition period of up to 36 months.
In this context, the AI Pact forms part of the EU’s broader initiative on AI regulation, and its purpose is to prepare organisations for the implementation of the AI Act’s measures. The AI Pact is based on the AI industry’s voluntary commitment to implement the obligations in the AI Act ahead of the legal deadlines, and it is aimed at assisting both EU and non-EU organisations in adopting the measures early.
The AI Pact is structured around two pillars. The first pillar concerns establishing a community in which participating organisations can exchange experience and knowledge. In practice, this is done by organising workshops for organisations that have expressed an interest in the AI Pact network, and by creating a dedicated online space for exchanging best practices.
The second pillar is aimed at encouraging providers and deployers of AI systems to prepare early for compliance with the AI Act. In this context, organisations are encouraged to disclose their processes and routines for ensuring compliance with the AI Act. The commitment is formalised through pledges to take concrete actions, such as assessing the risks of an AI system under development. The pledges will be made public by the European Commission.
General-purpose AI code of practice
The AI Act mandates the European AI Office to facilitate and encourage the drawing up of codes of practice. The European AI Office, which is established within the Commission, is the central hub for AI expertise across the EU. In September 2024, the European AI Office initiated work on the general-purpose AI Code of Practice (“Code of Practice”), which is scheduled to be finalised in April 2025. The purpose of the Code of Practice is to ensure proper application of the AI Act and to enable providers to rely on it to demonstrate compliance. Although the Code of Practice will not be legally binding, it will serve as a presumption of conformity, as it will give guidance on how companies can ensure compliance in the absence of harmonised standards.
In drawing up the Code of Practice, the European AI Office may invite providers of general-purpose AI models and relevant national competent authorities to participate. Additionally, civil society organisations, industry, academia and other relevant stakeholders may support the process.
The European AI Office is currently preparing the Code of Practice in collaboration with selected stakeholders participating in the Code of Practice Plenary. The Plenary has been organised into four working groups, each focusing on a dedicated theme: transparency and copyright-related rules; risk identification and assessment; technical risk mitigation; and internal risk management and governance of general-purpose AI providers.
The Code of Practice will be developed through an iterative process, and providers of general-purpose AI models will be given the opportunity to provide feedback on the drafts before they are approved.
BAHR’s view
The AI Convention marks a significant step towards establishing comprehensive and consistent international regulation of AI. The AI Pact underlines the urgency of consistent regulation by preparing organisations for the implementation of the AI Act’s measures, and the Code of Practice may assist developers and providers of general-purpose AI models in complying with the AI Act. A concern, however, is that these supplementary frameworks add complexity and may increase the compliance burden for organisations that are already subject to the AI Act and other comprehensive EU regulations, such as the GDPR.