Demystifying the Draft EU Artificial Intelligence Act
Pre-print, 06/07/2021
By Michael Veale and Frederik Zuiderveen Borgesius, two academics specialising in data and AI law.
In essence, according to the authors, the AI Act (to be precise, a proposed European regulation) boils down to:
- a prohibition on three types of AI, two of which are defined very, very narrowly, while the last lets Member States carve out exceptions for any security/military-type application:
- manipulation (of subjects unaware of it), but only where there is harm and intent
- public social scoring (but not private/corporate scoring)
- biometric identification (incl. facial recognition). [France, notably, wants to keep putting cameras everywhere, in the name of counter-terrorism etc.]
- regulation of high-risk AI (defined in an odd, far-from-simple way) through "essential requirements". In reality, though, this is standardisation: those essential requirements will be translated into (paid-for) standards by CEN/CENELEC to ensure simple and uniform application of European law, and in practice only those standards will be applied. With no oversight from the European Parliament and little or no third-party scrutiny
- ultimately, deregulation of the sale and use of non-high-risk AI, since the scope of the AI Act is "AI systems" in general, under a very broad definition of AI. Member States are therefore barred from intervening in the field (maximum harmonisation). [France would thus be forced to drop its transparency and explainability obligations for algorithms involved in any individual decision-making.]
Choice extracts from the Twitter thread:
"Reading the Act at surface level will typically misinform.
High-risk systems. This regime is based on the 'New Approach' (now the New Legislative Framework, or NLF), a tried-and-tested approach in product safety since the 1980s. It is far, far from new. Most parts are copy-pasted from a 2008 Decision. The idea of the NLF is the opposite of pharmaceutical regulation. There, you give the EMA/FDA docs to analyse. Under the NLF, you do all the analysis yourself, and are sometimes required to have a third-party certification firm (a 'notified body') check your final docs.

Before the Act becomes enforceable, the EC plans to ask two private organisations, CEN (European Committee for Standardisation) and CENELEC (European Committee for Electrotechnical Standardisation), to turn the essential requirements into a paid-for European 'harmonised standard'. If a provider applies this standard, they benefit from a presumption of conformity. No need to even open the essential requirements. Standards are not open access; they cost hundreds of euros to buy (from your national standards body). The European Parliament has no veto over standards (an alternative route to compliance exists, but in practice the standards will be what is applied). CEN/CENELEC have no fundamental rights experience.
An important and underemphasised part of the law: pre-emption and maximum harmonisation. When an EU instrument maximally harmonises an area, Member States cannot act in that area and must disapply conflicting law. This is a BIG DEAL for the future of AI policy. The core problem is that the AI Act's scope is all 'AI systems', even though it mainly puts requirements on 'high-risk' AI systems. The paper has a lot more detail, but essentially this means that Member States LOSE the ability to regulate normal AI. Under the AI Act, Member States are also unlikely to be able to freely regulate the *use* of in-scope AI.
France, for example, may arguably have to disapply its laws requiring the public sector to provide more detailed transparency about automated systems [the transparency obligation in the loi République numérique, the French Digital Republic Act, covering algorithmic processing involved in individual decisions]. The AI Act may override it.
There are big fines, sure: 6% of global turnover for breaching the prohibitions or the data quality requirements. But these are issued by Market Surveillance Authorities (MSAs), which are often non-independent government departments from NLF-land.
Individuals have no newly created right of action under the AI Act, and there are no complaint mechanisms as in the GDPR.
Conclusion: The EU AI Act is sewn together, like Frankenstein's monster, from 1980s product regulations. It's nice that it separates systems by risk. Its prohibitions and transparency provisions make little sense. Enforcement is lacklustre. High-risk systems are self-assessed, the rules privatised. Work is needed.
In the words of one reply to the thread: product market legislation, labelling, and standards are the 'alpha and omega' (the core lens) of the AI Act."