As artificial intelligence (AI) becomes more integrated into our everyday lives, the EU has adopted comprehensive legislation aimed at regulating the use of AI – the so-called AI Act. First proposed in 2021, the EU AI Act divides AI systems into categories of risk, ranging from "unacceptable" — which sees the technology banned outright — down through high, limited and minimal risk.

Concerns have been mounting over the potential for abuse of artificial intelligence, even as heavyweight players like Microsoft, Amazon, Google and chipmaker Nvidia beat the drum for AI investment.

On 13 March 2024, the AI Act was finally voted through in the European Parliament, paving the way for a common set of rules on the use of artificial intelligence across the 27 EU member states.

The AI Act will come into effect in stages, with implementation running from mid-2024 to mid-2027. The requirements for prohibited practices will apply just 6 months after entry into force. The requirements for general-purpose AI models are expected to apply from mid-2025, followed by the requirements for high-risk systems in 2026 and 2027.

The phase-in timeline, counted from entry into force, looks like this:

6 months for prohibited practices.

12 months for so-called general-purpose AI models (e.g. the models behind ChatGPT).

24 months for requirements for high-risk systems in Annex III.

36 months for high-risk systems that are harmonised products under Annex II.

More information on the EU AI Act is available on the EU's official website. The website of the Confederation of Danish Industry (DI) also contains excellent guides and explanations of the new EU act. DI is a private business and employers' organisation representing approximately 20,000 companies in Denmark.