Artificial Intelligence Act

The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation.[8][9]

The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework.[13]

The approach combines EU-level coordination with national implementation, involving both public authorities and private-sector actors.

Legal scholars have suggested that AI systems capable of generating deepfakes for political misinformation or creating non-consensual intimate imagery should be classified as high-risk and subjected to stricter regulation.

Experts have argued that though the jurisdiction of the law is European, it could have far-ranging implications for international companies that plan to expand to Europe.[37][38]

Anu Bradford at Columbia has argued that the law provides significant momentum to the worldwide movement to regulate AI technologies.[39]

Amnesty International criticized the AI Act for not completely banning real-time facial recognition, which they said could damage "human rights, civic space and rule of law" in the European Union.[39]

La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control".[41]

According to LQDN, the Act's reliance on self-regulation and its exemptions render it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".

Building on these critiques, scholars have raised particular concerns about the Act's approach to regulating the secondary uses of trained AI models, which may have significant societal impacts.[15]

They argue that the Act's narrow focus on deployment contexts and its reliance on providers to self-declare intended purposes create opportunities for misinterpretation and insufficient oversight.[42][43]

Some scholars criticize the AI Act for not sufficiently regulating the reuse of model data, warning of potentially harmful consequences for individual privacy, social equity, and democratic processes.[42]