
We Read the EU AI Act So You Don’t Have To: 10 Essential Takeaways for General Counsels

The European Union (“EU”) published the long-awaited Artificial Intelligence Act, Regulation (EU) 2024/1689 (“EU AI Act”), on 12 July 2024, and it officially came into force on 1 August 2024.

 

Consistent with the EU’s reputation for comprehensive and detailed legislation, the EU AI Act is notably extensive, spanning 13 chapters and 144 pages. Given the breadth and complexity of the legislation, attempting to cover the entire Act in a single article would neither do it justice nor serve the practical needs of businesses. This article therefore serves as a preliminary blueprint, highlighting 10 key takeaways that general counsels should note as an introduction to the EU AI Act. It is by no means exhaustive; further in-depth articles will follow, delving into specific topics and obligations.

 

If you think that this may not concern you because you do not operate within the EU, you may want to continue reading: the extraterritorial scope of the EU AI Act is extremely broad. Even if your company is located outside the EU, it may still be affected as long as it operates within the AI value chain. We trust that this article will be particularly beneficial in helping general counsels better understand (i) what the EU AI Act is about, (ii) what it intends to achieve, and (iii) who should be paying attention to this legislation.

 

Key Takeaway 1: What is the EU AI Act About?

One of the most frequently asked questions is, “What is the EU AI Act all about?” This is an essential question, as it sets the stage for a comprehensive understanding of the EU AI Act’s regulatory scope.

 

The EU AI Act addresses a wide range of issues, from clearly defining “AI systems” and “general-purpose AI models” to laying down its extraterritorial application, prohibited AI practices, and the classification of high-risk AI systems along with the associated requirements. The EU AI Act also outlines the obligations of providers, importers, distributors, and deployers of high-risk AI systems; transparency obligations for AI system providers and deployers; obligations for providers of general-purpose AI models; obligations of providers of general-purpose AI models with systemic risk; AI regulatory sandboxes; and penalties for non-compliance.

 

In essence, the EU AI Act establishes a comprehensive framework for the development, import, distribution, and deployment of AI systems within the EU. Given this extensive scope, as long as you play a role in the AI value chain within the EU market, you will likely be governed by the EU AI Act.

 

Key Takeaway 2: Who Does the EU AI Act Apply To?

This leads directly to the second key question: “Who exactly does the EU AI Act apply to?” The scope of this legislation is broad, with extraterritorial effects that extend its reach far beyond the borders of the EU. The EU AI Act applies to seven broad key categories of stakeholders:

 

  1. Providers of AI systems or general-purpose AI models in the EU, regardless of whether they are established or located within the Union or in a third country.
  2. Deployers of AI systems with a place of establishment or location within the Union.
  3. Providers and deployers of AI systems based in third countries where the AI system’s output is used within the Union.
  4. Importers and distributors of AI systems.
  5. Product manufacturers placing on the market or putting into service an AI system together with their product under their own name or trademark.
  6. Authorized representatives of providers not established in the Union.
  7. Affected persons located in the Union.

 

In summary, the EU AI Act generally applies to anyone involved in the development, use, import, or distribution of AI systems in the EU, regardless of where they are based. It even extends to providers and deployers based outside the EU whenever the output of their AI system is used within the Union: a provider of an AI system, wherever it is established, will be caught by the EU AI Act if the system’s output is used in the EU.

 

There are specific exclusions from the scope of the EU AI Act, such as AI systems used for military, defence, or national security purposes, and AI systems used for purely personal, non-professional activities, which we will cover more extensively in a subsequent article.

 

Key Takeaway 3: The Current Status of the EU AI Act and Its Implementation Stages

The EU AI Act was officially published on 12 July 2024, and while it came into force on 1 August 2024, it is important to note that its provisions will apply only gradually, in stages extending over several years.

 

As of the time of writing this article in August 2024, none of the EU AI Act’s requirements and obligations are immediately applicable. The first significant date for all general counsels to take note of is 2 February 2025, when Chapters I and II of the EU AI Act, primarily concerning prohibited AI practices, will take effect.

 

This phased implementation of the EU AI Act is beneficial, given the extensive compliance requirements, and it gives companies enough time to prepare and adapt to the new regulations. That being said, it is essential for general counsels to get ready for the first stage, particularly with regard to prohibited AI practices, which will be further explained below.

 

Key Takeaway 4: Definitions of “AI System” and “General-Purpose AI Model”

To fully understand and appreciate the EU AI Act, it is crucial to first comprehend the definitions of “AI system” and “general-purpose AI model,” as each comes with distinct requirements and obligations.

 

  • AI System: This is generally defined as a machine-based system designed to operate with varying levels of autonomy. An AI system may adapt after deployment and, from the inputs it receives, generates outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The key factors here are the system’s autonomy and its capacity to influence physical or virtual environments.
  • General-Purpose AI Model: This refers to an AI model, trained on large amounts of data, that displays significant generality and can competently perform a wide range of distinct tasks. These models can be integrated into various downstream systems or applications. However, it is important to note that AI models used solely for research, development, or prototyping before market placement are excluded from this definition.

 

From the reading of the EU AI Act, the key difference between an “AI system” and a “general-purpose AI model” lies in the use case of the system and its capabilities. For an AI system, a key characteristic is its capacity to infer: to derive, from the inputs or data it receives, outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. In contrast, general-purpose AI models are typically trained on large amounts of data, and while AI models are essential components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further components, such as a user interface, to become AI systems.
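
To make this distinction more tangible, the minimal Python sketch below (purely illustrative, and in no way a legal test under the Act) shows a bare model, which merely maps inputs to outputs, being wrapped with a further component, a simple user interface, to form something closer to an “AI system” in the Act’s sense. The SentimentModel and SentimentAISystem names and the toy inference logic are hypothetical constructs of our own.

    # Purely illustrative: the Act's definitions are legal, not technical, tests.
    # A bare model only maps inputs to outputs; adding further components,
    # such as a user interface, yields something closer to an "AI system".

    class SentimentModel:
        """Stands in for a trained AI model: input in, inference out."""

        def predict(self, text: str) -> str:
            # Hypothetical inference logic, for illustration only.
            return "positive" if "good" in text.lower() else "negative"

    class SentimentAISystem:
        """The model plus a user interface: closer to an 'AI system'."""

        def __init__(self, model: SentimentModel):
            self.model = model

        def run(self) -> None:
            text = input("Enter a product review: ")  # user interface
            print(f"Predicted sentiment: {self.model.predict(text)}")

    if __name__ == "__main__":
        SentimentAISystem(SentimentModel()).run()

The point is structural: the model alone is a component, and it is the surrounding interface and deployment context that move it towards the Act’s notion of an AI system.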

 

It is crucial to understand the difference between an AI system and a general-purpose AI model, as different requirements and obligations will apply accordingly.

 

Key Takeaway 5: Prohibited AI Practices

The fifth key takeaway concerns prohibited AI practices, which will apply from 2 February 2025. The EU AI Act outlines a list of AI practices that are strictly prohibited, with limited exceptions. These prohibited AI practices generally include:

 

  1. AI systems that manipulate individuals’ decisions;
  2. AI systems that exploit people’s vulnerabilities due to their age, disability, or specific social or economic situation;
  3. AI systems that evaluate or classify people based on their social behavior or personal traits;
  4. AI systems that predict a person’s risk of committing a crime;
  5. AI systems that create or expand facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage;
  6. AI systems that infer emotions in the workplace or educational institutions;
  7. AI systems that categorize people based on their biometric data.

 

A subsequent article will be published to provide further details on the AI practices that are prohibited and the exceptions. Given that this is the first set of regulations to be implemented under the EU AI Act, general counsels are advised to pay immediate attention to this particular part.

 

Key Takeaway 6: Understanding High-Risk AI Systems

One of the most critical aspects of the EU AI Act is the classification of high-risk AI systems. Under the EU AI Act, an AI system is considered high-risk if it is intended to be used as a safety component of a product, or if the AI system itself constitutes a product that falls under an extensive list of EU legislation covering diverse areas, including, but not limited to, machinery, toy safety, recreational watercraft, equipment for potentially explosive atmospheres, radio equipment, pressure equipment, cableway installations, and personal protective equipment.

 

Additionally, the EU AI Act classifies AI systems with particular use cases outlined in Annex III of the Act as high-risk. These use cases include biometrics, critical infrastructure, education and vocational training, employment, and access to essential private and public services.

 

A subsequent article will discuss in more detail the specific use cases that are considered high-risk AI systems and their exceptions. For now, besides ensuring that the company does not engage in prohibited AI practices, general counsels should examine whether an AI system falls within the high-risk category, as high-risk AI systems attract specific compliance requirements and obligations, which are further explained below.

 

Key Takeaway 7: Compliance Requirements for High-Risk AI Systems

Once an AI system is classified as high-risk, it must comply with a comprehensive list of requirements under the EU AI Act. These include:

 

  1. Risk Management System: A risk management system must be established, implemented, documented, and maintained as a continuous, iterative process throughout the entire lifecycle of the high-risk AI system.
  2. Data and Data Governance: Training, validation, and testing datasets must be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system.
  3. Technical Documentation: The technical documentation of a high-risk AI system must be prepared before the system is placed on the market or put into service and must be kept up-to-date. This documentation should demonstrate the system’s compliance with the necessary requirements.
  4. Record-Keeping: High-risk AI systems must technically allow for the automatic recording of events (logs) throughout the system’s lifetime (see the illustrative sketch after this list).
  5. Transparency and Information Provision: High-risk AI systems must be designed and developed to ensure sufficient transparency, enabling deployers to interpret the system’s output and use it appropriately. Providers must also supply clear instructions, including information about the provider, the system’s capabilities and limitations, and any potential risks.
  6. Human Oversight: High-risk AI systems must be designed to allow effective human oversight, ensuring that humans can intervene if necessary.
  7. Accuracy, Robustness, and Cybersecurity: High-risk AI systems must achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.
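
As flagged in item 4, record-keeping is one of the more concretely technical of these requirements. The Python sketch below is a minimal, hypothetical illustration of what automatic event logging over a system’s lifetime might look like in practice; the event fields, hashing choice, and file format are our own assumptions and are not prescribed by the EU AI Act.

    # Illustrative sketch only: one possible shape for the record-keeping
    # requirement (automatic logging of events over the system's lifetime).
    # The event schema shown here is hypothetical, not mandated by the Act.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    class AuditLogger:
        """Appends one JSON record per AI-system event to an append-only log."""

        def __init__(self, log_path: str, system_id: str, model_version: str):
            self.log_path = Path(log_path)
            self.system_id = system_id
            self.model_version = model_version

        def record(self, event_type: str, payload: dict) -> None:
            entry = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "system_id": self.system_id,
                "model_version": self.model_version,
                "event_type": event_type,  # e.g. "inference", "human_override"
                # Hash the payload so the log can evidence what was processed
                # without retaining raw, potentially personal, data.
                "payload_sha256": hashlib.sha256(
                    json.dumps(payload, sort_keys=True).encode()
                ).hexdigest(),
            }
            with self.log_path.open("a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")

    logger = AuditLogger("audit.jsonl", "cv-screener-01", "2024.08.1")
    logger.record("inference", {"candidate_id": 42, "score": 0.87})

Hashing the payload is one design choice that lets the log evidence what the system processed without storing raw, potentially personal, data.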

 

Key Takeaway 8: Obligations for Different Operators of High-Risk AI Systems

The EU AI Act also outlines a comprehensive set of obligations for various operators across the AI value chain concerning high-risk AI systems. These obligations encompass the responsibilities of providers, authorized representatives, importers, distributors, and deployers of high-risk AI systems.

 

The specific obligations vary depending on the operator’s role. For instance, deployers of high-risk AI systems must ensure human oversight by appointing individuals with the necessary competence, training, authority, and support. On the other hand, importers are required to verify that the high-risk AI system complies with the Act before it is placed on the market.

 

A subsequent article will lay out the specific obligations for different operators. For now, it is crucial for companies to first understand the role they play, as each role carries distinct legal obligations. Whether a company acts as a provider, authorized representative, importer, distributor, or deployer of high-risk AI systems, it must adhere to the relevant obligations set out by the EU AI Act.

 

Key Takeaway 9: General-Purpose AI Models and Systemic Risk

Besides prohibited AI practices and high-risk AI systems, another key aspect of the EU AI Act is its focus on general-purpose AI models, particularly those classified as having “systemic risk.”

 

As previously mentioned, a general-purpose AI model is defined as one that exhibits generality and can competently perform a wide range of distinct tasks, regardless of how it is marketed. These models can be integrated into various downstream systems or applications.

 

The EU AI Act also introduces the concept of general-purpose AI models with systemic risk. Systemic risk refers to the potential for these AI models to cause actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole.

 

It is essential for companies to understand the distinction between general-purpose AI models and those with systemic risk, as the obligations for providers of general-purpose AI models differ from those for providers of general-purpose AI models with systemic risk.

 

Key Takeaway 10: Transparency Obligations for Providers and Deployers

The final key takeaway from the EU AI Act pertains to the transparency obligations imposed on both providers and deployers of AI systems.

 

Certain AI systems intended to interact with natural persons or generate content may pose specific risks of impersonation or deception, regardless of whether they are classified as high-risk. Therefore, the use of these AI systems should be subject to specific transparency obligations, without prejudice to the requirements and obligations for high-risk AI systems, and subject to targeted exceptions to accommodate the special needs of law enforcement.

 

For instance, the EU AI Act mandates that providers ensure AI systems intended to interact directly with natural persons are designed and developed to clearly inform individuals that they are engaging with an AI system. This requirement is waived only when it is obvious to a reasonably well-informed, observant, and circumspect individual, given the circumstances and context of use. Deployers also have transparency obligations, such as disclosing when AI systems generate or manipulate image, audio, or video content that constitutes a deep fake.
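
For illustration, the short Python sketch below shows one hypothetical way a provider of a conversational system might implement this disclosure duty by announcing the system’s AI nature before any interaction begins; the notice wording and the program structure are our own assumptions, not text mandated by the Act.

    # Hypothetical sketch of the disclosure duty described above: an AI system
    # that interacts directly with people announces up front that it is an AI.
    # The notice wording is an assumption, not language prescribed by the Act.

    def chat_session(reply_fn) -> None:
        """Run a console chat that discloses its AI nature before interacting."""
        print("Notice: you are chatting with an AI system, not a human.")
        while True:
            message = input("You: ")
            if message.lower() in {"quit", "exit"}:
                break
            print(f"Assistant: {reply_fn(message)}")

    if __name__ == "__main__":
        # Stand-in for a real model call.
        chat_session(lambda m: f"(automated reply to: {m!r})")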

 

Conclusion

This article is not intended to be an exhaustive exploration of the entire EU AI Act but rather a preliminary introduction to its key aspects and implications for the AI value chain and all stakeholders involved. Future articles will explore specific topics within the Act in greater detail, providing more comprehensive insights into its requirements and impacts.

 

For now, general counsels should begin familiarizing themselves with these initial takeaways to better prepare for the challenges and obligations the EU AI Act introduces.

 

This article provides a foundational overview of the EU AI Act and its implications. For a deeper understanding tailored to your specific needs, or to ensure compliance with the Act’s complex requirements, our Technology Practice Group is here to assist. Our team is well-versed in the intricacies of the EU AI Act and can provide legal advice and training to support your organization. We invite you to reach out to us to discuss how we can work together to navigate the regulatory landscape and ensure your compliance with this significant legislation.


About the authors

Ong Johnson
Partner
Head of Technology Practice Group

Technology, Media & Telecommunications (“TMT”), Fintech, TMT Disputes, TMT Competition, Regulatory and Compliance
johnson.ong@hhq.com.my


Lo Khai Yi
Partner
Co-Head of Technology Practice Group
Technology, Media & Telecommunications (“TMT”), Technology Acquisition and Outsourcing, Telecommunication Licensing and Acquisition, Cybersecurity
ky.lo@hhq.com.my


Nicole Shieh E-Lyn
Associate

Technology, Media & Telecommunications (“TMT”), TMT Disputes
nicole.shieh@hhq.com.my


