Artificial Intelligence and European initiatives

Artificial Intelligence has the potential to revolutionise many areas of industry and society. For many industries, investments in AI can increase competitiveness and reduce costs, but they can also expose vulnerabilities. As such, these technologies have been both lauded and criticised. Whichever way you look at it, there is no doubt that regulation and oversight tools will need to be developed. In Europe, including Norway, this initiative is being led by the EU, which over the past two years has undertaken considerable legislative and policy work to assess how AI should be regulated. Although the process is far from complete, it is worth considering the measures taken so far and what lies ahead.

Artificial Intelligence, or AI, refers to a variety of systems or methods that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.[1] In computer systems, AI generally takes the form of machine learning. As the term implies, the machine learns patterns from data. The way the machine learns is complex, and when these systems are used for decision-making, it can be difficult to understand the reasons for a given decision. This “lack of explainability”, as the European Parliament has noted, is among the difficulties that make policy initiatives and regulation in this field challenging. Nevertheless, the EU has mounted an ambitious and substantial AI strategy which has seen a flurry of legislative work and initiatives over the past two years, with more developments expected.

In early 2017 the European Parliament instructed the Commission to assess the impact of AI. In turn, the Commission initiated a number of measures, a major milestone being the publication of a coordinated plan on a European AI strategy in December 2018. In essence, the coordinated plan supports an ethical, secure and cutting-edge AI that is “made in Europe”, drawing on the scientific and industrial strengths within Europe. Beyond this institutional framework for cooperation, all EU Member States, together with Norway, also signed a declaration of cooperation on AI in April 2018.

As a starting point, each country is to reflect and report on its national AI strategy and objectives in 2019. In Norway, the government kicked off its initiative to develop an AI strategy in February 2019 and launched an open consultation running until summer 2019, with the strategy to be in place by the end of the year.

Financing the development of AI is clearly at the forefront of the EU strategy. Indeed, the Commission highlights in its coordinated plan that, as it stands, Europe is lagging in AI investments on a global scale, and the overall objective is to reach a target of EUR 20 billion in combined public and private investments over the next decade. In the Commission’s own words this is an “ambitious yet realistic” target, the first step of which will be an increase in AI funding under the Horizon 2020 programme, in which Norway also participates. Growing funding for start-ups and fostering collaboration between academia and industry are also high on the agenda, and have garnered support from Big Data stakeholders. Cross-border coordination of research initiatives will also be strengthened, and several large-scale test sites are proposed.

Beyond these developmental measures, there is also a clear overarching ambition to develop trust in AI, supported by corresponding ethics guidelines. The ensuing legal framework will naturally gravitate around data, and the GDPR, referred to as the “anchor of trust” in the single market for data, will be central in this development. A first version of the ethics guidelines was issued in December 2018 and received over 500 comments in the consultation period. Subsequently, revised guidelines were published in April 2019 that set out seven key requirements for AI systems, essentially linked to oversight, safety, data governance, transparency, fairness and accountability. Yet the process is far from over: key stakeholders are now invited to provide further feedback on how the measures can be implemented or improved, which will be reviewed by the High-Level Expert Group on AI in 2020 and passed on to the Commission as a basis for proposing next steps.

The steps set out by the Commission have also been largely reflected in a resolution on a comprehensive European industrial policy on AI and robotics that was adopted by the European Parliament on 12 February 2019. In particular, the Parliament calls on the Commission to review and re-evaluate existing legislation in order to create a “regulatory environment favourable to the development of AI”. Regulatory measures will essentially focus on developing the internal market, ensuring data privacy and consumer protection, clarifying liability and securing IP rights.

This is clearly an area under continuous development, with the contours of a more uniform regulatory approach becoming progressively clearer. It is also an area of interest for other international organisations. The Council of Europe is developing strategies[2] and modernising legal frameworks[3] to meet the legal challenges that arise with this new technology, and on 22 May 2019, 42 countries, including Norway, adopted OECD principles on AI, which to a great extent echo the framework developed by the EU. This is a poignant reminder that international convergence and comity will be key to ensuring effective regulation of a technology that is inherently prone not to recognise borders.

BAHR will continue to review and advise on the expected developments in AI regulation.

[1] European Commission definition in the AI communication, April 2018
[2] High-level Conference AIFINCOE, https://www.coe.int/en/web/artificial-intelligence/aifincoe-conference-conclusions-english
[3] Modernisation of convention 108, https://www.coe.int/en/web/data-protection/convention108/modernised