Navigating the legal terrain of AI: a guide for in-house legal teams

AI is transforming industries at breakneck speed. At the same time, it’s a challenge for in-house legal teams in the EU to keep up, or even to decide what to keep up with. When and how should legal teams be involved in the AI initiatives of their businesses? This guide is made for in-house legal teams with such questions.

By Annemarie Bloemen

Expertise: Data & Digital Services

17.06.2024

This guide should help legal teams take an appropriate and realistic role in the AI transformation: enabling the business to stay within the legal and ethical boundaries, without the legal team being overwhelmed or seen as a roadblock. It should also help with developing the company’s AI strategy and setting up an AI compliance program.

This guide starts with a summary of the requirements of the AI Act and other legal requirements which may be applicable to the AI systems within your company (Part I). After that, we describe our take on what in-house legal teams can do to assist their businesses with AI compliance (Part II).

I. UNDERSTANDING THE LEGAL FRAMEWORK AROUND AI

 1. The AI Act

The upcoming AI Act, roughly effective from 2026, is mainly a product regulation. As you can read in the summary below, most AI Act requirements are aimed at high-risk or systemic-risk AI systems. If an AI system in your company falls within those categories, there is much to do, even if you are ‘just’ deploying such a system (‘deploying’ is the AI Act term for putting an AI system into operation and use within your organization). The extensive AI Act requirements include items such as all-embracing risk assessments, continuous testing, data quality and governance, accuracy, robustness and cybersecurity.

AI systems considered medium or low risk under the AI Act, such as your chatbot or content generator and including your General Purpose AI (GPAI), will probably only trigger transparency obligations, such as the integration of a digital ‘content generated by AI’ marking.

Below you will find an in-depth summary of the main requirements of the AI Act. Note that we left out items like the definition of AI and the territorial scope of the AI Act; just assume you fall under it for now. We also left out other things like the various AI authorities and EU AI governance, enforcement, penalties and AI liability. While equally important, we chose to focus on the positive side of the AI Act: what you need to do, and how to do it right.

1.1. Prohibited AI systems
The AI Act prohibits a number of specific AI systems in the EU market. These systems are by default deemed harmful in relation to safety, society or EU fundamental rights or values. This includes systems such as social scoring systems, AI that manipulates people or exploits their vulnerabilities in harmful ways, untargeted scraping of facial images to build facial recognition databases and, with narrow exceptions, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.

1.2. High-risk AI systems
For high-risk AI systems, the AI Act requires a conformity assessment against an extensive set of requirements before EU market access. When conformity is confirmed, the AI system is equipped with a CE marking.

1.2.1. Which AI systems are deemed high-risk? 
The high-risk qualification follows from the impact of these AI systems on health, safety or fundamental rights. The current list of high-risk AI systems – which can be updated by the EU Commission where necessary – includes:

   a) AI technologies integrated into products that are already regulated due to the possible risks they encompass, such as medical devices, machinery, vehicles (including cars), toys, lifts and aircraft.

   b) AI systems that could potentially harm public interests, including safety, fundamental rights, democracy, and the rule of law. This category includes AI systems used in areas such as biometrics, critical infrastructure, education, employment and worker management, access to essential private and public services (e.g. credit scoring and insurance), law enforcement, migration and border control, and the administration of justice and democratic processes.


According to a filter provision added in the final version of the AI Act, category b) AI systems are not considered high-risk if they do not pose a significant risk, for instance because a decision-making functionality is not materially influenced by the AI system. This includes AI systems used for narrow procedural tasks, AI systems used to improve the result of a previously completed human activity and AI systems used to detect decision-making patterns or deviations and not meant to replace a human assessment. Category b) AI systems used for profiling of natural persons will always be considered high-risk. Providers relying on this filter will be required to document and demonstrate their assessment.
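To make this triage logic more tangible, below is a minimal, illustrative sketch of how the category b) filter could be encoded in an internal intake tool. The data structure, field names and function are our own assumptions for illustration; the actual classification always requires a case-by-case legal assessment, which the provider must document.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Simplified intake record for a category b) AI use case (illustrative only)."""
    profiles_natural_persons: bool        # profiling of natural persons is always high-risk
    narrow_procedural_task: bool          # e.g. routing, formatting or translation steps
    improves_completed_human_work: bool   # only improves the result of a finished human activity
    detects_patterns_only: bool           # flags patterns/deviations without replacing human assessment
    materially_influences_decisions: bool # does the AI system materially influence decision-making?

def triage_high_risk(uc: UseCase) -> tuple[bool, str]:
    """Rough first-pass triage of the AI Act 'filter provision' for category b) systems."""
    if uc.profiles_natural_persons:
        return True, "Profiling of natural persons: always high-risk."
    filter_ground = (
        not uc.materially_influences_decisions
        or uc.narrow_procedural_task
        or uc.improves_completed_human_work
        or uc.detects_patterns_only
    )
    if filter_ground:
        # The filter may apply, but the assessment must be documented and demonstrable.
        return False, "Filter provision may apply; document and retain the assessment."
    return True, "No filter ground identified; treat as high-risk."
```

Such a helper only structures the intake conversation; the legal conclusion, and the documentation demonstrating it, remain the provider’s responsibility.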

1.2.2. Providers should ensure compliant technology
To pass the required conformity assessment, high-risk AI systems should comply with a range of extensive requirements, including a risk management system, data and data governance, technical documentation, record-keeping (logging), transparency and information for deployers, human oversight, and accuracy, robustness and cybersecurity.

It is the provider of the high-risk AI system who is required to ensure that the AI system complies with these requirements and to set up an extensive quality management program. Note that modifying an existing AI system (including a GPAI system) in such a way that it becomes high-risk makes you a provider.

1.2.3. Deployers should ensure compliant use of technology
When deploying high-risk AI systems, maintaining compliance involves using the system in accordance with its instructions for use, assigning human oversight to competent and trained staff, ensuring that input data under your control is relevant and representative, monitoring the system and keeping its logs, informing workers and other affected persons and, where required, performing a fundamental rights impact assessment (FRIA).

1.3. Medium / low risk AI systems - human interaction and content generation
In cases where AI systems interact with humans or generate content (e.g., text, images or deepfakes), providers have an obligation to ensure that people are informed that they are interacting with AI and that content is digitally marked as AI generated.

Deployers should inform people that they are exposed to an emotion recognition/biometric categorization system. Deployers of deepfakes should disclose the AI manipulation, with limitations applying to e.g. AI systems generating art or used in law enforcement.
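Part of this transparency obligation is technical rather than legal. Purely as an illustration, and assuming a generation pipeline that lets you attach metadata and a visible notice, a marking step could look like the sketch below; the wrapper, field names and helper are hypothetical, not prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """Hypothetical wrapper for AI-generated output and its disclosure metadata."""
    body: str
    metadata: dict = field(default_factory=dict)

def mark_as_ai_generated(content: GeneratedContent, model_name: str) -> GeneratedContent:
    """Attach a machine-readable marking and a human-readable notice to generated content."""
    content.metadata.update({
        "ai_generated": True,                                   # machine-readable flag
        "generator": model_name,                                # which model produced the output
        "generated_at": datetime.now(timezone.utc).isoformat()  # timestamp of generation
    })
    # Human-readable notice shown wherever the content is published.
    content.body += "\n\n[This content was generated with the help of AI.]"
    return content
```

In practice, robust marking will often rely on provenance or watermarking standards rather than a plain metadata flag; the sketch only shows where such a step sits in the publication flow.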

1.4. General-Purpose AI Models
1.4.1. What are they?
AI models are part of AI systems, which also include the infrastructure and processes required to support the AI operations and run the models. GPAI models are designed to perform a wide range of tasks across various domains, rather than being tailored for a specific function. These models are usually trained on large amounts of data, using self-supervised, unsupervised or reinforcement learning. The most famous example is the large language model underlying ChatGPT, but GPAI models are also found in image recognition software and general machine learning frameworks.

1.4.2. General requirements for providers of GPAI models
Providers of GPAI models should keep technical documentation on the testing and evaluation of their GPAI model and inform deployers of the capabilities and limitations of the GPAI model. Free and open-source GPAI models are exempted from these obligations.

Providers of GPAI models should also implement a policy to comply with EU copyright law and make a public summary available of the content used for training the GPAI model.

1.4.3. GPAI models with systemic risks – additional requirements for providers
Additional requirements apply to providers of powerful GPAI models which are considered to pose systemic risks. Such systemic risks can originate from the high-impact capabilities of the GPAI model, the amount of computation required, the amount or nature of the training data or the large number of registered users; think of the models behind ChatGPT or Gemini. Such GPAI models are deemed to have actual or reasonably foreseeable negative effects on e.g. democratic processes, public and economic security and the dissemination of illegal, false, or discriminatory content.

Under the AI Act, providers of such GPAI models have extensive requirements in the field of notifying the EU Commission, testing and evaluating their models, and assessing and mitigating risks.

1.5. AI Act provisions supporting innovation
Besides protecting humans, fundamental rights and democracy through product regulation, the AI Act also aims to support and protect innovation. This is shown through a number of rules facilitating testing in regulatory sandboxes or under real-world conditions.

More importantly, SMEs and start-ups have priority access to sandbox testing and benefit from lighter (administrative) requirements, such as a simplified form of the technical documentation for high-risk systems. Member States are required to provide SMEs and start-ups with support, including access to training, awareness and other guidance on the application and implementation of the AI Act.

 2. Additional regulations, laws or legal issues around AI

2.1. Falling outside of, or only partly within, the AI Act
As you have read, most of the AI Act’s requirements, and certainly the most comprehensive ones, are aimed at high-risk AI systems or systemic-risk GPAI models. Given this somewhat limited scope, you may very well only have some transparency obligations under the AI Act or obligations to mark content as AI generated.

The above does not mean that AI systems falling outside of the AI Act, or only within its medium/low risk scope, are legally off the hook. Probably not. The remainder of this section explains why.

2.2. General Data Protection Regulation (GDPR)
As said, the AI Act is mainly a product regulation. While promoting ‘human-centric AI’ is one of the purposes of the AI Act, its rules only indirectly protect individuals from AI harm. Unlike the GDPR, the AI Act does not regulate individual redress.

Naturally, the GDPR will always apply when your AI systems process personal data. GDPR compliance in the context of AI systems involves continuously safeguarding principles like purpose limitation, data minimization, accuracy and transparency. This can become a challenge when working with AI and may in any case require your organization to perform a DPIA, either as part of your FRIA or as a standalone assessment.

The AI Act strengthens the rights of individuals in relation to automated decision-making (currently only a short provision in the GDPR) where such techniques are used in high-risk areas, such as education, border control, essential services and law enforcement. In such cases, the extensive requirements for high-risk AI systems (see 1.2.2 and 1.2.3 above) apply to providers and deployers.

2.3. Data Act
The EU Data Act focuses, amongst other things, on data generated by Internet of Things (IoT) devices, aiming to stimulate innovation and competitiveness through data sharing.

Users of IoT devices have a claim against the data holder (i.e. the provider of the IoT device) for the provision of data generated using that IoT device. They can also request the data holder to transfer the data to a third party. When an IoT device is equipped with an AI system, the user's right to data provision also includes the data generated by that AI system.

2.4. The Digital Services Act (DSA)
You can read more on the DSA in our Navigating the DSA guide. The DSA requires online platforms to be transparent about, and ensure fair operation of, AI systems used by those online platforms to deliver their services or to comply with the DSA. This includes recommender systems, content moderation and complaint management operating on AI.

2.5. Sector-specific Regulations
Different industries have their own regulatory frameworks that may impact AI systems. For example, the financial sector has strict regulatory obligations to prevent fraud, to protect consumers and to demonstrate compliance with these regulations. This must be considered when deploying AI systems in this field, where, for example, explainability will be a point of attention.

2.6. General legal considerations
Even when the legislation described above does not apply, your organization must still consider general legal principles, e.g. in the form of contract and liability law, intellectual property rights, confidentiality and trade secrets, and consumer protection and non-discrimination rules.
2.7. Fundamental Rights
All parties in the AI value chain, from the provider to the end-user, should consider the broader implications of their AI systems on fundamental rights, even if the nature of the AI system is such that it does not require a FRIA under the AI Act. People still do not like to be discriminated against, or to be subject to a creepy bias.

2.8. Ethics, Societal Impact, Geopolitics, etc…
Even if an AI system does not have any direct consequences for individual rights, there may still be ethical consequences or consequences for broader public interests to consider.

What are the consequences of workplace AI for the availability of jobs? What are the consequences of AI in art for human creativity? What are the consequences of the lack of culturally diverse source data used to train ChatGPT and the other large language models we currently work with? What monopolies will arise if the battle for AI talent is won by big tech with deep pockets?

We could go on and on with this list, and we have no ready answers. For many, these items are not yet a concrete issue, but when ignored they will become a concrete issue for many.

Meaningful and independent checks & balances are key. A FRIA, or a responsible or ethical AI impact assessment, can guide you through (part of) these questions. Depending on the nature of your business and its risk appetite, you can include additional questions addressing these issues in your template AI impact assessment questionnaire.

II. THE TASKS OF THE LEGAL TEAM

Now that you have a flavour of the legal and ethical framework around AI, you can get to work. The step plan below contains our view on how best to handle this.

Before you start, keep in mind that it is not the legal team’s job to make sure that the business is compliant; that is the job of the business. The job of the legal team is to ensure that the business can be compliant.

5-step plan to allow the business to be compliant with the AI legal framework
    Step 1: Talk to the board
    Provide your board with a comprehensive overview of the legal framework around AI. Where relevant, indicate the broader issues, e.g. on ethics and society. Talking with the board works both ways: while you inform the board, the board can give you insight into the company’s AI ambitions and risk appetite, and a mandate for the legal team’s role.

    Step 2: Provide input to the company’s AI strategy, initiate if not there already
    Your knowledge of legal implications and potential risks can be integrated into your organization’s overall AI strategy. Initiate an AI strategy if there is not one already.

    Step 3: Join your company’s AI governance board, initiate if not there already
    Having an AI governance board with diverse stakeholders is important to ensure appropriate checks & balances in your company’s development and deployment of AI. 

    Other members of this board will, depending on the size and technological scale of your company, include representatives of e.g., IT, architecture, data, business, operations, risk and security.

    Step 4: Talk to the business and gain common understanding
    To effectively advise the business on AI compliance and equip them with efficient tools & training, you also need a comprehensive understanding of their current AI use cases and future ambitions. Initiate discussions with key stakeholders in each department to understand the specific applications of AI, the data sources they rely on, and the objectives they aim to achieve through AI. 

    This includes meeting with product development teams to learn about AI-driven innovations, speaking with marketing and sales to understand customer data analysis and possible needs from the customer side, and collaborating with operations to see how AI optimizes workflows and processes.

    Again, this works both ways - the business will understand where you are coming from for your step 5.

    Step 5: Equip the business
    After gathering insights and support from steps 1-4, you are ready to design the AI compliance program. Note: this is not solely a legal program, especially not if the AI Act is applicable to AI systems in your company. Get the right expertise on board. An AI compliance program will be a joint effort with experts on e.g., project management, data science, quality management, information technology, risk management and, if you are really lucky and this is a separate role, ethics. You may consider using AI to assist your company with its AI compliance program.

    Here are the subjects the legal/ethical part of your AI compliance program will need to tackle:

    1. AI Governance
    Roles and responsibilities: Ensure the right roles are assigned the appropriate obligations in the field of AI compliance. This should make sure that the necessary legal/ethical checks & balances are performed at the right time in the AI life cycle.
    2. Create policies, procedures and templates
    AI Policies: AI policies should include legal/ethical principles, boundaries and do’s and don’ts to be used in the daily business.  
    AI contracting templates: Contracting templates or guidelines can help the business with procuring or licensing AI systems with appropriate contractual and legal arrangements.
    3. Raise awareness
    Employee training: Organize training sessions to enhance employees' understanding of legal/ethical AI compliance. Emphasize the importance of adhering to legal and ethical standards.
    Designate points of contact: Identify and train key personnel within the organization to have a more in-depth understanding of AI compliance. These individuals will act as points of contact, offering guidance and support to their colleagues on a case-by-case basis.
    Enlist experts: Engage external experts to bring additional perspectives and to keep the organization informed of the latest developments in AI compliance.

    4. Provide tools
    Compliance checklists: Create checklists for the business to use for legal/ethical AI compliance checks (see the sketch after this list for one possible shape). Use visual aids to demystify complex legal concepts for non-legal staff.
    Risk assessment tools: Supply the business with tools that assist in the necessary AI risk assessments, including FRIAs and DPIAs. During such assessments, you will be a stakeholder at the table.
    Monitoring and reporting tools: Where required given the size and technological scale of your company, audit and monitoring should be part of the AI compliance program.
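    As a purely illustrative example of what a machine-readable compliance checklist could look like, the sketch below structures a few intake questions per legal theme so that answers can later feed monitoring and reporting. The themes, questions and field names are our own assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One legal/ethical check for an AI use case (illustrative structure)."""
    theme: str        # e.g. "AI Act", "GDPR", "Ethics"
    question: str     # the check the business must answer
    owner_role: str   # who in the organization is accountable for the answer
    evidence: str     # what documentation demonstrates compliance

AI_COMPLIANCE_CHECKLIST = [
    ChecklistItem("AI Act", "Is the use case prohibited, high-risk, or limited-risk?",
                  "AI governance board", "Documented classification assessment"),
    ChecklistItem("AI Act", "Is AI-generated content marked and are users informed?",
                  "Product owner", "Transparency notice and marking configuration"),
    ChecklistItem("GDPR", "Is personal data processed, and is a DPIA required?",
                  "Privacy officer", "DPIA or documented decision that none is needed"),
    ChecklistItem("Ethics", "Have bias, societal impact and fundamental rights been assessed?",
                  "Ethics / legal", "FRIA or AI impact assessment"),
]
```

    Whether such a checklist lives in code, a questionnaire tool or a spreadsheet matters less than having the same questions asked consistently for every AI use case.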
    Conclusion
    The journey to effectively manage AI’s legal challenges is continuous and dynamic. It is not just about ensuring compliance but about enabling the business to thrive within the legal boundaries. By understanding the legal framework, engaging with the board and the business, and equipping the organization with the right tools and knowledge, you can help your organization stay in control of its legal and ethical AI risks. Happy to help - do not hesitate to contact Annemarie Bloemen for advice and support.


    Annemarie Bloemen

    Data & Digital Services
