
The European Union Artificial Intelligence Act – Should Artificial Intelligence Be Regulated?

After the European Union Artificial Intelligence Act (the “EU AI Act”) was proposed by the European Commission in 2021, the European Parliament and the Council finally reached a provisional agreement on its final version on 9 December 2023. The final text of the EU AI Act will undergo technical review and refinement before being released to the public. From the European Parliament’s press releases, however, one can already form a preliminary idea of the general scope of the EU AI Act.
Should Artificial Intelligence be Regulated?
The EU AI Act is often touted as a “global first” legal framework for the regulation of artificial intelligence (“AI”), with clear rules for its usage. This naturally raises the question: should AI be regulated? The consensus appears to lean heavily towards “yes”, given that even industry players, the technology companies and the developers of AI themselves, are calling for regulation, or at least for industry standards governing the ethical and safe development of AI.
The case for regulation goes beyond doomsayers’ fears of AI dominating or even destroying humanity, as depicted in The Terminator franchise. What is actually driving the call for regulation is much more imminent: ethical concerns as well as safety and security risks. Depending on the data sets used to train an AI model, its usage may cause discrimination against marginalised groups of people (e.g., rating a person with a darker skin tone as more likely to default on a loan, or a facial recognition model that cannot recognise certain skin tones as well as it does others). Inappropriate usage of AI may also spread misinformation and disinformation or lead to wrongful arrests of suspects by law enforcement.
In the face of these imminent threats, regulation seems necessary to provide a guardrail ensuring the development of ethical and safe AI, which is what the EU AI Act sets out to achieve.
The EU AI Act: A friend or a foe?
Regulation of AI must be delicately crafted: too stringent, and it becomes a stranglehold that stifles innovation and development; too loose, and it becomes toothless. The EU AI Act’s attempt at striking this balance can be seen in its risk-based approach to AI.
To start with, the EU AI Act adopts a neutral and broad definition of “AI systems” aligned with that proposed by the Organisation for Economic Co-operation and Development: “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments”. Within this definition, AI systems are further categorised by the level of risk they pose: (i) minimal / no risk (e.g., AI-enabled recommender systems in Netflix and Instagram); (ii) limited risk (e.g., simple chatbots and AI-enabled sorting systems); (iii) high risk (e.g., AI-enabled medical devices, AI used for law enforcement purposes); and (iv) unacceptable risk (e.g., predictive policing AI, or AI that processes sensitive personal information such as sexual orientation or political and religious beliefs). Each category attracts a different level of scrutiny. AI systems posing unacceptable risk are banned outright; high-risk systems are subject to strict oversight and reporting requirements; while systems posing limited to minimal risk are given a free pass or made subject to light obligations, such as informing users that they are dealing with AI-generated content.
The EU AI Act also seeks to impose additional obligations in respect of “general purpose AI systems” (“GPAI”), that is, AI systems with a wide range of possible uses, both intended and unintended by their developers (think ChatGPT, DALL-E, Bing AI, PaLM). Deployers or providers of GPAI may be required to conduct risk evaluations before launch, disclose the sources of their training data sets, monitor and report on energy efficiency, conduct red-teaming exercises, and more. These additional guardrails on GPAI appear intended to prevent the unauthorised exploitation of third-party works that have been made available online, to minimise unintended usage of GPAI, and to address the ESG concerns posed by the proliferation of large language models.
Scope of Applicability of EU AI Act
Based on the version of the EU AI Act proposed by the European Commission in 2021, the Act was intended to have extraterritorial application. In addition to providers and users of AI systems based in the European Union, providers and users based outside the EU whose AI systems produce output used in the EU would also be subject to the EU AI Act. If this scope survives into the final text, the EU AI Act will have an overarching reach: as long as an AI system’s output is used in the EU, compliance with the EU AI Act will be compulsory. Failure to comply may attract fines calculated as a percentage of the violator’s global annual turnover.
As one of the first (if not the first) comprehensive regulations on AI, the EU AI Act will likely become the model for similar regulations in other countries and influence how the rest of the world shapes its AI legal frameworks. Deployers and builders of AI systems outside the EU will be paying close attention to its implementation and enforcement. We would even recommend that deployers and builders of AI systems outside the EU benchmark their AI models and practices against the EU AI Act, in anticipation of similar rules being drawn up closer to home.
There is no doubt that AI is a powerful tool with wide-ranging applications in our daily lives. It can affect our social behaviour, determine which candidates get hired, improve access to medical treatment, and impact human lives in many other ways. Like it or not, the technology is here to stay. To ensure its ethical and safe development, regulation is inevitable. Industry players should see regulation not as a force against innovation, but as a guardrail to foster and nurture the sustainable growth of the technology, maximising its potential for the betterment of humankind.
To better understand the regulatory landscape in relation to AI, or if you need legal assistance in adopting or deploying AI in your organisation, our team of experts is ready to help. Feel free to reach out to us for further information or to schedule a discussion. We look forward to being your trusted partner on your digital transformation journey.

This article is intended to be informative and not intended to be nor should be relied upon as a substitute for legal or any other professional advice.

About the authors

Lo Khai Yi
Co-Head of Technology Practice Group
Technology, Media & Telecommunications, Intellectual
Property, Corporate/M&A, Projects and Infrastructure,
Privacy and Cybersecurity
Halim Hong & Quek

Ong Johnson
Head of Technology Practice Group
Transactions and Dispute Resolution, Technology,
Media & Telecommunications, Intellectual Property,
Fintech, Privacy and Cybersecurity
