We call it the “AI Act”: it definitely sounds more “Tech” than “European regulation on artificial intelligence”!
This text is at the heart of the “Coordinated Plan on Artificial Intelligence (AI)” adopted – like the one on sustainable finance – in 2018. That plan defined actions and financing instruments for the adoption and development of AI across all sectors of the economy. At the same time, Member States were encouraged to develop their own national strategies.
And like the action plan on sustainable finance, the one on AI was updated in 2021, in the wake of the COVID pandemic and the Green Deal.
The world’s very first “law” on AI, the regulation is due to be definitively adopted this Tuesday, May 21, in Brussels. It aims to establish a balanced framework for the development of AI in Europe, reconciling the protection of freedoms with innovation. It will not come fully into force until 2026.
The text thus proposes a classification of the different uses of AI according to four levels of risk, ranging from authorization without reservation to prohibition (see the sketch after this list):
- low or no risk: video games, anti-spam filters, etc.
- moderate risk: “deepfakes” (falsified images), chatbots, etc.
- high risk: CV screening, predictive justice, autonomous cars, detection of forged documents, granting of bank credit, etc.
- and finally, unacceptable risk: social scoring, subliminal techniques, emotion recognition, real-time facial recognition, predictive policing, etc.
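For readers who think in code, here is a minimal Python sketch of this four-tier classification, using the article’s own examples. The names (`RiskTier`, `USE_CASE_TIERS`, `classify`) and the mapping are illustrative choices for this post, not terms taken from the regulation’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from least to most restricted (illustrative labels)."""
    MINIMAL = "low or no risk"          # authorized without reservation
    LIMITED = "moderate risk"           # subject to transparency obligations
    HIGH = "high risk"                  # subject to strict requirements before deployment
    UNACCEPTABLE = "unacceptable risk"  # prohibited, though with exemptions in practice

# Illustrative mapping of the article's examples onto the four tiers.
USE_CASE_TIERS = {
    "video games": RiskTier.MINIMAL,
    "anti-spam filters": RiskTier.MINIMAL,
    "deepfakes": RiskTier.LIMITED,
    "chatbots": RiskTier.LIMITED,
    "CV screening": RiskTier.HIGH,
    "predictive justice": RiskTier.HIGH,
    "autonomous cars": RiskTier.HIGH,
    "granting of bank credit": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time facial recognition": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a known use case (illustrative only)."""
    return USE_CASE_TIERS[use_case]

if __name__ == "__main__":
    for case, tier in USE_CASE_TIERS.items():
        print(f"{case}: {tier.value}")
```

Running the sketch simply prints each example next to its tier; the point is that the regulation’s logic is a lookup from use case to obligation level, not a judgment on the underlying technology.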
There is no doubt that, as in the area of sustainable finance, great minds will step forward to explain to us that Europe definitely regulates too much.
Yet we cannot say that the Tech lobbies have failed to do their job: even at the maximum level of risk, the ban on these technologies is rarely total. We can even say that by imposing a ban accompanied by exemptions, we authorize practices that were previously prohibited – such as real-time facial recognition or emotion recognition in the security field, for example.
Some people are also concerned about another question: the frantic race for ever-bigger algorithms has few limits other than those of resources. Let us grant that, for the moment, AI is not the most polluting part of digital technology. However, to obtain the data needed to train algorithms, you need sensors: equipment which – you need only attend a “fresque du numérique” to understand this – is responsible for the majority of the pollution produced by digital technology, from greenhouse gas emissions to metal consumption. Even if this very diffuse pollution is still poorly measured, it is massive.
In any case, beyond the uses they can already make of AI, companies must now train their management to think seriously about the use of these technologies. And for that, the existence of a regulatory framework cannot hurt.
Iconography: still from “2001: A Space Odyssey” © Stanley Kubrick
After working as an international banker for emerging countries, Laurent Lascols became global head of country risk / sovereign risk (from 2008 to 2013), then global director of public affairs (from 2014 to 2019), at Societe Generale. Since early 2023, he has been managing partner at ARISTOTE, an advisory firm and training organization dedicated to corporate social responsibility, sustainable finance and impact finance.