On 2 August 2024, the EU AI Act officially came into force, marking a significant milestone in the regulation of artificial intelligence within the European Union.
In our article, “We Read the EU AI Act So You Don’t Have To: 10 Essential Takeaways for General Counsels”, we provided a broad overview of this crucial AI legislation. Given the extensive scope of the EU AI Act, it is impractical to cover it in its entirety in a single article. We therefore intend to break the EU AI Act down into more manageable topics, offering in-depth analysis through a series of subsequent articles. Our latest publication, “EU AI Act: The Essential Guide to Copyright Compliance for General-Purpose AI Models”, delves into copyright compliance specifically related to the training of general-purpose AI models.
In this article, we address another crucial aspect of the EU AI Act: the prohibition of certain AI practices. This topic demands particular attention because these practices are strictly prohibited under the EU AI Act and non-compliance carries severe penalties. The prohibitions reach all companies developing AI systems, including those established or operating outside the EU, owing to the extra-territorial effect of the EU AI Act: as long as an AI system is intended to be placed on the market, put into service, or used in the EU, the company behind it must comply with these prohibitions to avoid significant risks.
The 8 Categories of Prohibited AI Practices
Effective 2 February 2025, the EU AI Act will prohibit 8 broad categories of AI practices. This prohibition extends beyond EU borders, affecting international AI system providers, including those based in Malaysia that seek to enter the EU market. Understanding these prohibitions is essential for compliance and strategic planning. The 8 categories of prohibited AI practices under the EU AI Act are:
- 1. Manipulative AI Systems
- The EU AI Act prohibits manipulative AI systems, defined as AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort a person’s behaviour by impairing their ability to make an informed decision, leading them to make choices that could cause significant harm.
- The EU AI Act treats AI systems designed to materially distort human behaviour and cause harm to physical, psychological, or financial interests as dangerous and subject to prohibition. This includes AI systems that use subliminal elements, such as audio, image, or video stimuli imperceptible to individuals, or other manipulative techniques that undermine or impair a person’s autonomy, decision-making, or free choice in ways that are not consciously recognised. Even where individuals are aware of these techniques, they may still be deceived or unable to control or resist them.
- 2. Exploitative AI Systems
- The EU AI Act also prohibits exploitative AI systems. Although they overlap with manipulative AI systems in some respects, exploitative AI systems specifically target the vulnerabilities of individuals or groups arising from their age, disability, or specific social or economic situation, materially distorting their behaviour in a manner likely to cause significant harm. This includes exploiting the vulnerabilities of individuals living in extreme poverty or those belonging to ethnic or religious minorities. The EU AI Act takes the prohibition of both manipulative and exploitative AI systems seriously: any AI-enabled practice resulting in significant harm is prohibited, regardless of the provider’s intention.
- 3. Social Scoring AI Systems
- Social scoring AI systems, which are becoming increasingly common in many countries, are prohibited under the EU AI Act. These systems evaluate or classify individuals or groups based on their social behaviour or personality characteristics. The prohibition applies where the resulting social score leads to detrimental or unfavourable treatment either in social contexts unrelated to the context in which the data was originally generated or collected, or in a way that is unjustified or disproportionate to the social behaviour or its gravity.
- The EU AI Act considers that AI systems providing social scoring of individuals may lead to discriminatory outcomes and the exclusion of certain groups, and it therefore prohibits such unacceptable scoring practices.
- 4. Risk Assessment Profiling AI Systems
- Risk assessment profiling AI systems, which assess individuals to predict the risk of their committing a criminal offence based solely on profiling or on an assessment of their personality traits and characteristics, are also prohibited under the EU AI Act. However, this prohibition does not apply to AI systems used to support a human assessment of a person’s involvement in criminal activity where that assessment is already based on objective and verifiable facts directly linked to the criminal activity.
- In line with the presumption of innocence, the EU AI Act stipulates that a person should not be judged on AI-predicted behaviour derived solely from profiling, personality traits, or characteristics, without reasonable suspicion grounded in objective, verifiable facts and without human assessment. Risk assessments that gauge the likelihood of offending or predict potential criminal activity solely on the basis of profiling are therefore prohibited.
- 5. Facial Recognition Database AI Systems
- Facial recognition database AI systems are another common type of AI tool; they create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. The EU AI Act prohibits the use of such systems because this practice contributes to a feeling of mass surveillance and can lead to severe violations of fundamental rights, including the right to privacy.
- 6. Emotion Inference AI Systems
- Emotion inference AI systems, which infer the emotions of a person in the workplace or in educational institutions, are also prohibited under the EU AI Act, except where used for medical or safety reasons.
- The EU AI Act views AI systems identifying or inferring emotions or intentions based on biometric data as potentially discriminatory and intrusive to individual rights and freedoms. In contexts such as the workplace or education, where there is an inherent power imbalance, such systems could result in unfair or harmful treatment. Therefore, the use of AI systems intended to detect emotional states in these settings is prohibited, unless marketed solely for medical or safety purposes.
- 7. Biometric Categorisation AI Systems
- Biometric categorisation AI systems that categorise individuals based on biometric data to infer attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation are also prohibited. However, this prohibition does not cover the lawful labelling or filtering of biometric datasets, such as sorting images according to hair or eye colour for law enforcement purposes.
- 8. Real-time Biometric Identification AI Systems
- A real-time remote biometric identification system refers to an AI system that identifies individuals without their active involvement, typically at a distance, by comparing a person’s biometric data with biometric data contained in a reference database. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited unless it is strictly necessary for one of the following objectives:
- (i) Searching for specific victims of abduction, trafficking, or sexual exploitation, as well as searching for missing persons;
- (ii) Preventing threats to life or physical safety, or preventing a terrorist attack; or
- (iii) Localising or identifying a person suspected of having committed a criminal offence.
- The EU AI Act views the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement purposes as particularly intrusive to the rights and freedoms of the concerned individuals. It may affect the private lives of a large portion of the population, evoke a feeling of constant surveillance, and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.
The Penalty for Non-Compliance
Non-compliance with the prohibited AI practices under the EU AI Act carries severe penalties. Companies found in violation could face administrative fines of up to EUR 35 million or 7% of their total worldwide annual turnover for the previous financial year, whichever is higher.
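For illustration, take a hypothetical company with a worldwide annual turnover of EUR 1 billion in the previous financial year: 7% of that turnover is EUR 70 million, which exceeds EUR 35 million, so the maximum fine would be EUR 70 million. For companies with turnover below EUR 500 million, 7% of turnover falls under EUR 35 million, and the EUR 35 million figure therefore sets the cap.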
3 Strategic Actions for General Counsels in Light of the EU AI Act
As the enforcement of prohibited AI practices under the EU AI Act approaches on 2 February 2025, general counsels must take decisive actions to ensure compliance and mitigate risk. The following three key steps are essential:
- 1. Acquire In-Depth Knowledge of Prohibited AI Practices
- General counsels must thoroughly understand the eight categories of AI practices prohibited by the EU AI Act. This knowledge is critical for effective risk management and ensuring compliance. Familiarity with these prohibited practices will enable early identification of potential issues and facilitate proactive risk mitigation.
- 2. Conduct a Comprehensive Internal Audit of AI Systems
- Initiate a detailed internal audit by collaborating with key business units, particularly product development and technology departments. This audit should assess all AI systems and models in development, their intended use cases, and potential impacts. It is crucial to evaluate not only the intended purposes but also the possible effects of these AI systems to identify any practices that may fall within the prohibited categories.
- 3. Develop a Proactive Compliance Strategy
- Should the audit uncover any AI activities that fall under the prohibited categories, especially those targeting the EU market, general counsels should swiftly formulate a compliance strategy. Possible actions include limiting distribution to non-EU markets, modifying product functionalities, or ceasing the development of certain AI solutions.
While the EU AI Act presents new compliance challenges, its phased implementation provides an opportunity to prepare. Immediate focus should be on understanding and addressing prohibited AI practices.
The Technology Practice Group at Halim Hong & Quek is well-versed in technology law, including the EU AI Act, and we are currently providing training to multinational corporations in Malaysia on this subject. Should you require assistance or wish to schedule a more detailed discussion to ensure compliance, please let us know.
About the authors
Ong Johnson
Partner
Head of Technology Practice Group
Technology, Media & Telecommunications (“TMT”), Fintech, TMT Disputes, TMT Competition, Regulatory and Compliance
johnson.ong@hhq.com.my
Lo Khai Yi
Partner
Co-Head of Technology Practice Group
Technology, Media & Telecommunications (“TMT”), Technology Acquisition and Outsourcing, Telecommunication Licensing and Acquisition, Cybersecurity
ky.lo@hhq.com.my
Jerrine Gan Jia Lynn
Pupil-in-Chambers
Technology Practice Group
jerrine.gan@hhq.com.my