Navigating the legal terrain of AI: a guide for in-house legal teams

AI is transforming industries at breakneck speed. At the same time, it is a challenge for in-house legal teams in the EU to keep up, or even to decide what to keep up with. When and how should legal teams be involved in the AI initiatives of their businesses? This guide is written for in-house legal teams with such questions.

By Bas Dijkmans van Gunst

Expertise: IT Law

29.09.2025


This guide is written for in-house legal teams with such questions. It should assist them in taking an appropriate and realistic role in the AI transformation. The focus: legal teams enabling the business to stay within legal and ethical boundaries, without those same legal teams being overwhelmed or being seen as a roadblock. The guide should also help with developing the company’s AI strategy and setting up an AI compliance program.

This guide starts with a summary of the requirements of the AI Act and other legal requirements which may be applicable to the AI systems within your company (Part I). After that, we describe our take on what in-house legal teams can do to assist their businesses with AI compliance (Part II).

I. UNDERSTANDING THE LEGAL FRAMEWORK AROUND AI

1. The AI Act

The AI Act is mainly a product regulation. As you can read in this summary, most AI Act requirements are aimed at high-risk or systemic-risk AI systems. If an AI system in your company falls within those categories, there is much to do, even if you are ‘just’ deploying such a system (‘deploying’ is the term for putting an AI system into operation and use). The extensive AI Act requirements include items such as comprehensive risk assessments, continuous testing, data quality and governance, accuracy, robustness and cybersecurity.

AI systems considered medium or low risk under the AI Act, such as your chatbot or content generator, and including your General Purpose AI (GPAI), will probably trigger transparency obligations or require the integration of a digital ‘content generated by AI’ marking.

The AI Act’s obligations are being rolled out in tiers: the first tiers started to apply in 2025, and the main obligations (for high-risk AI systems) will start to apply in 2026. A detailed timeline is provided below.

The following is an in-depth summary of the main requirements of the AI Act. Note that we left out items like the definition of AI and the territorial scope of the AI Act; for now, just assume that you fall within its scope. A critical step in compliance, though, is determining your organization’s role for each AI system: are you a provider or a deployer? In short, the provider is the creator and the deployer is the user or operator of an AI system.

1.1. Prohibited AI practices
The AI Act prohibits a number of specific AI systems in the EU market. These systems are by default deemed harmful in relation to safety, society or EU fundamental rights or values. This includes systems such as:
1.2. High-risk AI systems
For high-risk AI systems, the AI Act requires a conformity assessment, based on extensive requirements, before EU market access. Once conformity is confirmed, the AI system is equipped with a CE marking.

1.2.1. Which AI systems are deemed high-risk?
The high-risk qualification follows from the impact of these AI systems on health, safety, or fundamental rights. The current list of high-risk AI systems, which can be updated by the EU Commission where necessary, includes:

a) AI technologies integrated into products that are already regulated due to the risks they pose, such as medical devices, machinery, vehicles (including cars), toys, lifts and aircraft.

b) AI systems that could potentially harm public interest, including safety, fundamental rights, democracy, and the rule of law. This category includes:
According to a filter provision added in the final version of the AI Act, category b) AI systems are not considered high-risk if they do not pose a significant risk, for instance because a decision-making functionality is not materially influenced by the AI system. This includes AI systems used for narrow procedural tasks, AI systems used to improve the result of a previously completed human activity and AI systems used to detect decision-making patterns or deviations and not meant to replace a human assessment. Category b) AI systems used for profiling of natural persons will always be considered high-risk. Providers relying on this filter will be required to document and demonstrate their assessment.
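
Purely as an illustrative sketch of what such documentation could look like (the field names, the example use case and the simplified decision logic below are our own assumptions, not a format prescribed by the AI Act), the assessment could be recorded in a structured way, for instance in Python:

# Illustrative sketch only: field names and decision logic are simplified
# assumptions, not a documentation format prescribed by the AI Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FilterAssessment:
    system_name: str
    use_case_category: str                                 # the category b) use case at issue
    involves_profiling: bool                               # profiling of natural persons: always high-risk
    narrow_procedural_task: bool
    improves_prior_human_activity: bool
    detects_patterns_without_replacing_human_review: bool
    rationale: str                                         # free-text reasoning demonstrating the assessment
    assessed_on: date = field(default_factory=date.today)

    def considered_high_risk(self) -> bool:
        if self.involves_profiling:
            return True
        # Simplification: not high-risk only if at least one filter condition applies.
        return not (
            self.narrow_procedural_task
            or self.improves_prior_human_activity
            or self.detects_patterns_without_replacing_human_review
        )

assessment = FilterAssessment(
    system_name="CV pre-sorter",
    use_case_category="employment",
    involves_profiling=False,
    narrow_procedural_task=True,
    improves_prior_human_activity=False,
    detects_patterns_without_replacing_human_review=False,
    rationale="Only orders incoming applications by completeness; recruiters still review every application.",
)
print(assessment.considered_high_risk())  # False under these assumptions

Keeping such a record per AI system makes it easier to demonstrate the assessment to a regulator or counterparty when asked.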

1.2.2. Providers should ensure compliant technology
To pass the required conformity assessment, high-risk AI systems should comply with a range of extensive requirements including:
It is the provider of the high-risk AI system who is required to ensure that the AI system complies with these requirements and to set up an extensive quality management program. In short, the AI Act defines a provider as a party that develops an AI system or has it developed under its own name. Modifying an existing AI system, including a GPAI system, in such a way that it becomes high-risk also makes you a provider. A deployer is an organization that uses an AI system in its processes in a professional capacity.

The distinction is important since the AI Act obligations depend on your role. The provider carries most of the compliance requirements (especially for high-risk AI), while the deployer has a narrower set of duties. Note that you can be both provider and deployer for the same system – for example, a firm that developed an AI tool in-house and then uses it internally is both. In that case, the obligations of each role apply within their respective scopes.
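
To keep track of this per AI system, a simple role record can already help. The sketch below is a deliberately rough illustration under our own assumptions; the one-line decision logic is far coarser than the AI Act’s actual provider and deployer definitions:

# Rough illustration only: the decision logic is a simplification of the
# AI Act's provider/deployer definitions, used here just to record roles.
from enum import Flag, auto

class Role(Flag):
    NONE = 0
    PROVIDER = auto()
    DEPLOYER = auto()

def determine_roles(develops_or_commissions_under_own_name: bool,
                    uses_professionally: bool) -> Role:
    roles = Role.NONE
    if develops_or_commissions_under_own_name:
        roles |= Role.PROVIDER
    if uses_professionally:
        roles |= Role.DEPLOYER
    return roles

# A firm that built an AI tool in-house and also uses it internally holds both roles.
print(determine_roles(True, True))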

1.2.3. Deployers should ensure compliant use of technology
When deploying high-risk AI systems, maintaining compliance involves:
1.3. Medium / Low risk AI Systems - Human Interaction and Content Generation
In cases where AI systems interact with humans or generate content (e.g., text, images or deepfakes), providers have an obligation to ensure that people are informed that they are interacting with AI and that content is digitally marked as AI-generated.

Deployers should inform people that they are exposed to an emotion recognition or biometric categorization system. Deployers of deepfakes should disclose the AI manipulation, with exceptions applying to, e.g., AI systems generating art or AI systems used in law enforcement.
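
For providers, the required marking of AI-generated content has to be machine-readable, but the AI Act leaves the precise technical format to standards and guidance that are still taking shape (content-provenance metadata and watermarking are the usual candidates). Purely to illustrate the idea, and with every field name below being our own assumption rather than an official standard, generated text could be bundled with a machine-readable disclosure along these lines:

# Illustrative sketch only: field names are assumptions, not an official
# marking standard; real implementations should follow emerging standards.
import json
from datetime import datetime, timezone

def wrap_with_ai_disclosure(content: str, generator: str) -> str:
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "generator": generator,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "disclosure": "This content was generated by an AI system.",
        },
    }
    return json.dumps(record, ensure_ascii=False)

print(wrap_with_ai_disclosure("Draft product description ...", "internal-llm-v1"))

In practice, aligning with an established provenance standard will usually be preferable to inventing a proprietary format.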

1.4. General-Purpose AI Models
1.4.1. What are they?
AI models are part of AI systems, which also include the infrastructure and processes required to support the AI operations and run the models. GPAI models are designed to perform a wide range of tasks across various domains, rather than being tailored to a specific function. These models are usually trained in a self-supervised, unsupervised or reinforcement learning setting on substantial amounts of data. The best-known example is the large language model, which is the basis of ChatGPT, but GPAI models are also included in image recognition software and general machine learning frameworks.

1.4.2. General requirements for providers of GPAI models
Providers of GPAI models should keep technical documentation on the testing and evaluation of their GPAI model and inform deployers of the capabilities and limitations of the GPAI model. Free and open-source GPAI models are exempted from these obligations.

Providers of GPAI models should also implement a policy to comply with EU copyright law and make a public summary available of the content used to train the GPAI model.

1.4.3. GPAI models with systemic risks – additional requirements for providers
Additional requirements apply to providers of powerful GPAI models which are considered to encompass systemic risks. Such systemic risks can originate from the high-impact capabilities of the GPAI model, the amount of computation required, the amount or nature of the training data or the considerable number of registered users. This would apply to, e.g., the models underlying ChatGPT or Gemini. Such GPAI models are deemed to have actual or reasonably foreseeable negative effects on, e.g., democratic processes, public and economic security and the dissemination of illegal, false or discriminatory content.

Under the AI Act, providers of such GPAI models are subject to extensive requirements relating to notifying the EU Commission, testing and evaluating their models, and assessing and mitigating risks.

1.5. Implementation timeline
The AI Act was published last year, but its obligations are being rolled out gradually. Key dates are summarized below:
Note that, due to geopolitical developments, this timeline may change; new proposals to revise it are introduced periodically.

1.6. Penalties
The AI Act mandates that EU Member States impose effective, proportionate, and dissuasive penalties for violations. Regulators will ultimately decide exact fines case-by-case, considering factors like the nature and gravity of the infringement and any mitigating or aggravating circumstances. However, the regulation itself sets out maximum fines in tiers. The key penalty tiers are:
Member States were required to establish their national penalty regimes in line with these maxima and to ensure that enforcement authorities can act by 2 August 2025.

1.7. AI Act provisions supporting innovation
Besides protecting, e.g., people, fundamental rights and democracy through product regulation, the AI Act also aims to support and protect innovation. This is reflected in a number of rules facilitating testing in regulatory sandboxes or under real-world conditions.

More importantly, SMEs and start-ups are given priority access to sandbox testing and benefit from lighter (administrative) requirements; for example, they may provide the technical documentation for high-risk systems in a simplified form. Member States are required to provide SMEs and start-ups with support, including access to training, awareness-raising and other guidance on the application and implementation of the AI Act.

2. Additional regulations, laws, or legal issues around AI

2.1. Falling outside of, or only partly within, the AI Act
As you have read, most of the AI Act’s requirements, and its most comprehensive ones, are aimed at high-risk AI systems and systemic-risk GPAI models. Given this somewhat limited scope, your organization may very well only have some transparency obligations under the AI Act, or obligations to mark content as AI-generated.

This does not mean that AI systems falling outside the AI Act, or only within its medium/low-risk tier, are legally off the hook. They probably are not. Part II explains this further.

2.2. General Data Protection Regulation (GDPR)
As mentioned, the AI Act is mainly a product regulation. While promoting ‘human-centric AI’ is one of the purposes of the AI Act, its rules only indirectly protect individuals from AI harm. Individual redress is not regulated through the AI Act, unlike under the GDPR.

Naturally, the GDPR will always apply when your AI systems process personal data. GDPR compliance in the context of AI systems involves continuously safeguarding principles like purpose limitation, data minimization, accuracy and transparency. This can become a challenge when working with AI and may in any case require your organization to perform a data protection impact assessment (DPIA), either as part of your fundamental rights impact assessment (FRIA) under the AI Act or as a standalone assessment.

The AI Act strengthens the rights of individuals in relation to automated decision-making (currently covered by only a short provision in the GDPR) where such techniques are used in high-risk areas, such as education, border control, essential services and law enforcement. In such cases, the extensive requirements for high-risk AI systems (see Part I under 1.2.2 and 1.2.3) apply to providers and deployers.

2.3. Data Act
The EU Data Act focuses, among other things, on data generated by Internet of Things (IoT) devices – aiming to stimulate innovation and competitiveness through fair data sharing.

Users of IoT devices have a claim against the data holder (typically the provider of the IoT device) for access to and use of the data generated using that IoT device. They can also request the data holder to transfer the data to a third party. When an IoT device is equipped with an AI system, the user's right to data provision also includes the data generated by that AI system. This may allow for the use, porting and, e.g., training of (new) AI systems based on the data generated by such IoT devices.

The Data Act entered into force in January 2024 and has generally applied since 12 September 2025. Limited parts of the legislation will start applying at a later date. See, for example, Articles 3(1) and 50, which provide, in short, that mandatory changes to the design, manufacture and provision of IoT products and related services will apply only to such products and services placed on the market after 12 September 2026.

2.4. Digital Services Act (DSA)
You can read more on the DSA in our Navigating the DSA guide. The DSA requires online platforms to be transparent about, and ensure fair operation of, AI systems used by those online platforms to deliver their services or to comply with the DSA. This includes recommender systems, content moderation and complaint handling that rely on AI.

2.5. Cyber Resilience Act (CRA)
The CRA is a new EU regulation focused on the cybersecurity of products with digital elements, introducing specific cybersecurity requirements. It also covers standalone software, although open-source software is exempt when developed or supplied outside a commercial context.

Since AI systems are also delivered as software or IoT devices, this regulation will often apply in combination with the AI Act. It entered into force in December 2024 and, after a transition period, its main obligations will apply from 11 December 2027 onwards.

2.6. Revised Product Liability Directive (PLD)
The EU agreed on a new PLD to replace the 1985 rules. It entered into force in late 2024, but Member States have until 9 December 2026 to transpose it into national law. It will apply to products placed on the market from that date onward, while ‘older’ products remain subject to the 1985 directive. The new PLD expands the definition of ‘product’ to explicitly include software and AI systems. As a result, if an AI system (or software generally) causes damage, the strict liability regime can apply just as it would to any other defective product. Liability for AI has thus become a very real possibility. For high-risk AI, AI Act compliance will not only help avoid regulatory fines, but will also better position the organization to defend against such claims.

Besides this revision, an AI Liability Directive (AILD) was proposed with the aim of facilitating civil claims for AI-caused harm by introducing presumptions of causality and disclosure obligations. However, the European Commission withdrew this proposal in early 2025.

2.7. Sector-specific Regulations
Different industries have their own regulatory frameworks that may impact AI systems. For example, the financial sector has strict regulatory obligations to prevent fraud, to protect consumers and to demonstrate compliance with these regulations. This must be taken into account when deploying AI systems in this field, where, e.g., explainability will be a key concern.

2.8. General legal considerations
Even when the legislation described above does not apply, your organization must still consider general legal principles, e.g., in the form of:
2.9. Fundamental Rights
All parties in the AI value chain, from the provider to the end-user, should consider the broader implications of their AI systems for fundamental rights, even if the nature of the AI system is such that it does not require a FRIA under the AI Act. People still do not like to be discriminated against, or to be the subject of creepy bias.

2.10. Ethics, Societal Impact, Geopolitics, etc.
Even if an AI system does not have any direct consequences for individual rights, there may still be ethical consequences or consequences for broader public interests to consider.

What are the consequences of workplace AI for the availability of jobs? What are the consequences of AI in art for human creativity? What are the consequences of the lack of culturally diverse source data used to train ChatGPT and the other large language models we currently work with? What monopolies will arise if the battle for AI talent is won by big tech with deep pockets?

We could go on and on with this list, and we have no ready answers. For many, these items are not yet a concrete issue, but if ignored they will become one for many.

Meaningful and independent checks & balances are key. A FRIA, or a responsible or ethical AI impact assessment, can guide you through (part of) these questions and, depending on the nature of your business and its risk appetite, you can include additional questions in your template AI impact assessment questionnaire to address these issues.

II. THE TASKS OF THE LEGAL TEAM
Now that you have a flavor of the legal and ethical framework around AI, you can get to work. The step-by-step plan below contains our view on how to best manage this.

Before you start, keep in mind that it is not the legal team’s job to make sure that the business is compliant; that is the job of the business itself. The job of the legal team is to ensure that the business can be compliant.

5-step plan to enable the business to comply with the AI legal framework

Step 1: Talk to the board
Provide your board with a comprehensive overview of the legal framework around AI. Where relevant, indicate the broader issues, e.g., on ethics and society. Talking with the board works both ways:
Step 2: Provide input to the company’s AI strategy, initiate if not there already
Your knowledge of the legal implications and potential risks can be integrated into your organization’s overall AI strategy. Initiate an AI strategy if there is not one already.

Step 3: Join your company’s AI governance board, initiate if not there already
Having an AI governance board with diverse stakeholders is important to ensure appropriate checks & balances in your company’s development and deployment of AI.

Other members of this board will, depending on the size and technological scale of your company, include representatives of e.g., IT, architecture, data, business, operations, risk and security.

Step 4: Talk to the business and gain common understanding
To effectively advise the business on AI compliance and equip them with efficient tools & training, you also need a comprehensive understanding of their current AI use cases and future ambitions. Initiate discussions with key stakeholders in each department to understand the specific applications of AI, the data sources they rely on, and the objectives they aim to achieve through AI.

This includes meeting with product development teams to learn about AI-driven innovations, speaking with marketing and sales to understand customer data analysis, and collaborating with operations to see how AI optimizes workflows and processes.

Since many of the legal obligations will depend on the classification of your organization’s AI systems, it is crucial to gain a good overview. A good start would be collecting the answers to the following questions:



You can use similar types of questionnaires for other legal areas, such as privacy and security. The overview you have gained of AI use in your organization will allow you to effectively address any legal and contractual obligations. For example, finding out that your organization mainly uses predictive AI suggests a governance focus on accuracy and fairness, while generative AI would call for a focus on content and behavior.
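
To make the answers actionable, they can be captured in a structured inventory per AI use case, so that classification and follow-up actions remain traceable. The sketch below is illustrative only; the field names, risk labels and triage rules are our own assumptions and would need tailoring to your organization and questionnaire:

# Illustrative sketch only: field names, labels and triage rules are assumptions.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    purpose: str
    role: str                      # "provider", "deployer" or "both"
    ai_type: str                   # e.g. "predictive" or "generative"
    processes_personal_data: bool
    risk_class: str                # e.g. "prohibited", "high", "medium/low", "out of scope"
    follow_ups: list[str] = field(default_factory=list)

def suggest_follow_ups(uc: AIUseCase) -> list[str]:
    # Rough triage routing each use case to the right assessments.
    actions = []
    if uc.risk_class == "high":
        actions.append("Run an impact assessment (FRIA) and map provider/deployer obligations")
    if uc.processes_personal_data:
        actions.append("Assess whether a DPIA is required")
    if uc.ai_type == "generative":
        actions.append("Check transparency and AI-content marking duties")
    return actions

chatbot = AIUseCase(
    name="Customer support chatbot",
    business_owner="Operations",
    purpose="First-line handling of customer questions",
    role="deployer",
    ai_type="generative",
    processes_personal_data=True,
    risk_class="medium/low",
)
chatbot.follow_ups = suggest_follow_ups(chatbot)
print(chatbot.follow_ups)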

Step 5: Equip the business
After gathering insights and support in steps 1-4, you are ready to design the AI compliance program. Note that this is not solely a legal program, especially not if the AI Act applies to AI systems in your company. Get the right expertise on board. An AI compliance program will be a joint effort with experts in, e.g., project management, data science, quality management, information technology, risk management and, if you are in luck and this is a separate role, ethics. You may consider using AI to assist your company with its AI compliance program.

Here are the subjects the legal/ethical part of your AI compliance program will need to tackle:

1. AI Governance

2. Create policies, procedures, and templates

3. Raise awareness

4. Provide tools

The journey to effectively manage AI’s legal challenges is continuous and dynamic. It is not just about ensuring compliance but about enabling the business to work within the legal boundaries. By understanding the legal framework, engaging with the board and the business, and equipping the organization with the right tools and knowledge, you can help the organization stay in control of its legal and ethical AI risks. We are happy to help; do not hesitate to contact us for advice and support.

Bas Dijkmans van Gunst

IT Law

Roan de Jong

Data & Digital Services
