AI Act – Artificial Intelligence Provider: When You Are, and When You Become One


Authors: Luca Ceselli, Laura Senatore, Lorenzo Covello

 

Abstract

Understanding when a party qualifies as a provider under the AI Act is not straightforward. The line between those who develop, distribute, and integrate AI solutions can be blurry, especially in the case of white-label offerings and when GPAI models are reused or modified. This article analyses the qualification criteria and the new responsibilities of “downstream” actors in the value chain.

 

Context information

In the lexicon of the European Regulation on Artificial Intelligence (AI Act),[1] “provider” is not a merely formal label. It is the key to determining who is subject to specific obligations, ranging from mere transparency in the case of AI interacting with humans or generating and altering multimedia content, to much more stringent requirements relating to design, documentation, risk governance, and post-market oversight for high-risk systems. Understanding whether and when one is a provider is not an academic exercise: what looks like a formal qualification has a concrete impact on budgets, project schedules, the allocation of contractual responsibilities, the AI system’s commercialization strategy, and ultimately the very possibility of bringing a system into production.

Ambiguity arises when a company uses third-party technologies – general-purpose AI models or off-the-shelf systems – and considers placing its own name or trademark on the final product or service. A common example is that of white-label chatbots.[2] The question is a very practical one: am I “just” distributing or using someone else’s technology, or am I assuming the role of provider with all the associated responsibilities? A further point that may require clarification when integrating a general-purpose AI (“GPAI”) model into a system is who assumes the role of provider, and of what type of AI system.

 

Application scenarios, main problems, practical solutions

The legal definition of a provider under the AI Act is found in Article 3(3).[3] In short, a provider is someone who develops a system or model (including GPAI) or has that system or model developed and then places it on the market or puts it into service under their own name or trademark, whether for a fee or free of charge.

The legislator chose a “functional” attribution: criteria such as technical or economic control and external branding take precedence over the simple authorship of the code. This leads to four operational messages in the context of offering technology to business customers.

First, commissioning a third party to develop an AI solution does not eliminate the qualification of provider: if the system is marketed under a company’s name, that company is generally the provider under the AI Act. Second, free provision does not preclude qualification, since the Regulation clarifies that making a system available “free of charge” can still constitute placing it on the market. Third, the branding issue is crucial in white-label offerings. If a company takes a third-party system and releases it under its own name or with its own trademark, the Regulation would tend to attribute the qualification of provider to that company, even if the underlying model is developed by others. Fourth, it is important to remember that this kind of analysis concerns the individual AI system and not necessarily the entire tool or product in which it is integrated; a software program can incorporate multiple AI systems, each with its own provider under the AI Act.

In assessing whether a party qualifies as a provider under the AI Act, Article 25 plays a crucial role,[4] regulating role changes along the value chain (“AI Supply Chain”) for high-risk systems. The provision identifies cases in which an operator that did not develop the system – such as an importer, distributor, or deployer – can still assume the qualification of provider and the associated responsibilities.

Article 25 captures three typical situations.

The first situation is rebranding: when a distributor, importer, or deployer places its name or trademark on a high-risk system already placed on the market or put into service, it becomes the new provider of that system.

The second situation is “substantial modification”:[5] if an entity downstream of the provider in the value chain (e.g., the deployer) significantly changes the behaviour, performance, security, or context of use of a high-risk system (as in the case of a modification to the software architecture or replacement of the operating system), it qualifies as provider under a “regulatory succession” mechanism, with subsequent duties of cooperation between the old and new provider to ensure traceability.[6]

The third situation concerns the change of intended purpose:[7] if an entity uses an AI system (including a GPAI system) in a context or for purposes other than those envisaged by the provider, and that use brings the system within the category of high-risk systems pursuant to Article 6, the entity becomes a provider in relation to that specific use case.
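As a rough illustration only, the three Article 25 triggers can be read as a simple decision rule. The function name and flags below are our own shorthand, not terms from the Regulation:

```python
def assumes_provider_role(
    rebrands_high_risk_system: bool,
    makes_substantial_modification: bool,
    repurposes_into_high_risk_use: bool,
) -> bool:
    """Illustrative sketch of Article 25 AI Act: a downstream operator
    (importer, distributor, or deployer) assumes the provider role for a
    high-risk AI system if any one of the three situations applies.

    - rebrands_high_risk_system: places its own name or trademark on a
      high-risk system already placed on the market or put into service
    - makes_substantial_modification: substantially modifies the system
      within the meaning of Article 3(23)
    - repurposes_into_high_risk_use: changes the intended purpose so that
      the system becomes high-risk under Article 6
    """
    return (
        rebrands_high_risk_system
        or makes_substantial_modification
        or repurposes_into_high_risk_use
    )

# A deployer that only uses the system as intended keeps its deployer role.
print(assumes_provider_role(False, False, False))  # False
# Rebranding alone is enough to trigger the role change.
print(assumes_provider_role(True, False, False))   # True
```

The point of the sketch is that the triggers are alternative, not cumulative: any one of the three situations is sufficient.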

In parallel, the regulation of general-purpose AI models (“GPAI”) introduces a further layer of complexity in assessing the value chain. When determining whether an entity qualifies as a provider, not only the provisions relating to high-risk systems but also those relating to general-purpose AI models must be considered. In this regard, Article 3(63) introduces the concept of “downstream provider”, defined as the entity that places on the market or puts into service an AI system – including a general-purpose AI system – that incorporates an AI model, regardless of who developed that model. In other words, the downstream provider is the entity that integrates an AI model (developed internally or obtained from third parties) into its own system or product, thus assuming the typical responsibilities of a provider in relation to that system. Integration can occur either when the company itself develops and incorporates the model into its own system, or when the model is provided by another entity (e.g., a GPAI model provider) and integrated into the system under a licensing or supply agreement. The concept of downstream provider reflects the AI Act’s approach to the AI value chain, where responsibilities are not confined to the entity that develops the model but extend to those who integrate, adapt, or distribute it, in a spirit of shared responsibility throughout the system’s lifecycle.

While the Regulation does contain a definition of downstream provider, it is less straightforward to understand what type of system (or model) one can become a downstream provider of; namely, whether and when integrating a general-purpose AI model into one’s own system makes the company a provider of a general-purpose AI system or of a general-purpose AI model.

The relationship between GPAI model providers and downstream providers is one of the most complex aspects of the AI Act. The former is responsible for the compliance and documentation of the general-purpose AI model (Article 53), while the latter reuses or integrates the model to create a system intended for specific applications or sectors. In this relationship, the AI Act promotes a principle of shared responsibility and information cooperation, requiring the GPAI model provider to make available to the downstream provider the technical documentation and elements necessary to ensure that the resulting system complies with applicable requirements.[8]

Given this logic of operational and regulatory continuity between the “upstream” model provider and the “downstream” system provider, the recent European Commission Guidelines for Providers of General Purpose AI Models[9] and the European Commission’s FAQs[10] provide valuable clarification on when a party that modifies a GPAI model can be considered – in turn – a provider of the resulting AI model. These documents clarify under which circumstances a party “downstream” of the provider in the value chain can become a provider of the modified model (for example, through fine-tuning, adaptations, updates, or the integration of components that change its behaviour).

The Commission recognizes that this is a complex issue with significant economic implications, as many operators modify or specialize existing base models. As recital 97 of the AI Act also specifies, fine-tuning or adapting a model can lead to the creation of a new model, but not every modification automatically entails a change in qualification. The Commission sets out an indicative criterion for a “downstream modifier” to be considered the provider of a general-purpose AI model: the computing power used for the modification or fine-tuning is greater than one third of the computing power used to train the original model.[11]

In line with recital 109, the obligations envisaged for GPAI model providers (Article 53) must, however, be understood as limited to the part actually modified. This means that the provider obligations of the downstream actor (downstream modifier) will only concern the portion of the model that has actually been modified; for example, the downstream modifier will have to supplement the pre-existing technical documentation with information about the modifications made. This aspect is crucial in modern AI architectures, which often start from a base model that is then specialized for a specific domain. If that modified model is later redistributed or integrated into systems that extend its capabilities and scope (where the computing power used for the modification exceeds one third of that used to train the original model), the downstream actor is no longer just a user, but assumes the role of provider of the resulting model. The relevant documentation and transparency requirements, however, will be tailored to the portion actually under their control. Those who integrate the general-purpose AI model into a system without making such a modification will assume the status of downstream provider of the system (but not of the model) and must then qualify the resulting AI system based on the risk classification required by the AI Act.
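The Commission’s indicative threshold is, at bottom, a simple arithmetic test. A minimal sketch follows; the function name and the FLOP figures in the usage example are illustrative assumptions, not values taken from the Guidelines:

```python
def is_provider_of_modified_model(
    modification_compute_flop: float,
    original_training_compute_flop: float,
) -> bool:
    """Sketch of the Commission's indicative criterion: a downstream
    modifier is treated as provider of the resulting GPAI model when the
    compute used for the modification is greater than one third of the
    compute used to train the original model."""
    return modification_compute_flop > original_training_compute_flop / 3

# Hypothetical figures: original model trained with 3e24 FLOP.
print(is_provider_of_modified_model(1.2e24, 3e24))  # True: above one third
print(is_provider_of_modified_model(0.5e24, 3e24))  # False: below the threshold
```

Note that the criterion is indicative, not a bright-line safe harbour: in practice the qualitative effect of the modification on the model’s behaviour should be assessed alongside the compute ratio.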

What does all this mean, concretely, for legal and product teams?

First of all, the choice of branding is a regulatory decision, as well as a marketing one. When evaluating a white-label chatbot, one must not only ask “who wrote the code,” but also “who presents itself as accountable for the system to users and customers,” that is, which entity places its name and trademark on the AI system. Second, it is necessary to evaluate whether the specific intended use is consistent with the purpose originally envisaged (and permitted) by the provider and whether such use places the system in one of the high-risk categories – usage for purposes that are “new” or not permitted or foreseen by the provider may result in the transition to the role of provider. Third, if you integrate a GPAI model into your own system, you can become a provider of the resulting AI system, which must be classified according to the risk criteria of the AI Act. Fourth, if you modify a GPAI model (for example, by retraining it using more than a third of the computing power used for the original training), you can become a provider of the modified portion of the GPAI model. It will therefore be necessary to precisely map what has been changed and with what effects. Documentation and accountability for the modified portion will be a key tool for demonstrating the scope of applicable obligations.
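The four checks above can be folded into a single pre-launch screening sketch. The flag names and the returned labels are our own illustrative shorthand, not AI Act terminology:

```python
from typing import List, Optional


def pre_launch_roles(
    markets_under_own_brand: bool,
    new_high_risk_use: bool,
    integrates_gpai_model: bool,
    modification_compute_ratio: Optional[float] = None,
) -> List[str]:
    """Return the (illustrative) provider roles a company may assume.

    modification_compute_ratio: compute used to modify the GPAI model,
    divided by the original training compute; None if no modification.
    """
    roles = []
    if markets_under_own_brand:
        roles.append("provider of the AI system (own name/trademark)")
    if new_high_risk_use:
        roles.append("provider for the new high-risk use case (Article 25)")
    if integrates_gpai_model:
        roles.append("downstream provider of the resulting AI system")
    if modification_compute_ratio is not None and modification_compute_ratio > 1 / 3:
        roles.append("provider of the modified portion of the GPAI model")
    return roles


# White-label chatbot sold under the company's own brand, built on an
# unmodified third-party GPAI model: two roles are flagged.
print(pre_launch_roles(True, False, True))
```

A check of this kind is only a first triage; each flagged role still needs a case-by-case legal assessment.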

 

Conclusions

The concepts presented are straightforward in theory but require discipline in practical application, and they carry significant economic implications.

Being a provider, especially in the case of high-risk AI, means managing the entire lifecycle of the system or model for the part under your responsibility: from design to post-market monitoring. Remaining a deployer may have less impact, but it is still not a cost-free scenario. For this reason, contracts with upstream suppliers (i.e., suppliers positioned earlier in the AI value chain) and technology partners must clearly reflect who is the provider of what – especially when working with white-label offerings or making substantial modifications – as well as clarify what the permitted purposes of use (intended purposes) are.

At the governance level, it is advisable to introduce a regulatory filter mechanism in pre-market decisions. Before launching a new “AI by [Brand]” product, appropriate checks must be carried out with respect to branding, technical control, intended use, and the impact of fine-tuning. For high-risk systems, this test must be integrated into product compliance management and post-market surveillance, coordinating roles and information flows between upstream and downstream stakeholders, as required by Article 25.

For general-purpose AI (GPAI) models, when integrating a GPAI model into a proprietary system, the resulting system must be analysed and classified according to the AI Act’s risk criteria. If an entity modifies a GPAI model using more than a third of the computing power used to train the original model, it can be considered a provider of the modified portion of the model. Organizations should therefore maintain records of changes and of the datasets used for any fine-tuning and model adaptation activities, so that they can demonstrate whether and for which portion of the model they qualify as a provider.

In all these cases, branding is not just a business decision; it can become a responsibility. For companies evaluating white-label offerings or integrations with existing models, the next step is to institutionalize a pre-launch qualification check identifying in advance who the provider is, what they offer, and what obligations need to be considered. This is, ultimately, the pragmatic way to transform the question of “are we deployers or providers?” from a latent risk into a lever for compliance and commercial reliability.

 

Operational takeaways

 

    1. Verify precisely who qualifies as a “provider.”

A provider is an entity that develops – or has developed on its behalf – an AI system or GPAI model and markets it or puts it into service under its own name or trademark, even free of charge. The qualification derives not only from development, but also from the technical, economic, or branding control exercised over the system or model.

 

    2. For high-risk AI systems, pay attention to:
      • Rebranding

The application of a new trademark or trade name to a high-risk AI system already placed on the market or put into service means that, pursuant to Article 25 of the AI Act, the entity carrying out the rebranding assumes the status of provider, with the related compliance and documentation obligations.

      • Substantial Changes

A substantial modification to a high-risk AI system – for example, significant changes in behaviour, performance, security, or context of use, as in the case of significant updates to the software architecture or operating system – also results in a change of status. The entity making the modification becomes the provider of the resulting system and must ensure traceability and cooperation with the original provider.

      • Consistency with the original purposes of use (“intended purposes”)

The use of an AI system for purposes other than those intended by the original provider may result in a change in qualification, particularly when the new use falls within the category of high-risk systems pursuant to Article 6 of the AI Act.

 

    3. Correctly identify when you become a “downstream provider.”

Integrating an AI model – even a GPAI model – into a proprietary system or product qualifies the integrating entity as downstream provider of the resulting system. In this case, you must:

      • classify the system according to the risk level foreseen by the AI Act;
      • check the technical documentation relating to the model (see Annex XII of the AI Act).

 

    4. Distinguish between integration and modification of a GPAI model.

Integration “as is” of a GPAI model does not determine the qualification of provider of the model, but only of the system incorporating it, if that system is then marketed under the entity’s own name or trademark. However, if the model is modified (for example, by retraining) and the computing power used for these operations exceeds one third of that used for the original training, the entity becomes the provider of the modified portion of the model.

 

    5. Accurately document changes and datasets used.

When retraining GPAI models, it is necessary to maintain records of the changes made, the datasets used, and the computational power employed. This helps limit the provider’s obligations to the actual modified part and demonstrates traceability and accountability.

 

    6. Clearly define roles and responsibilities in contracts.

Agreements between stakeholders along the AI value chain must include specific clauses to ensure compliance with the AI Act. These clauses should be drafted taking into account:

        • how the system was trained (including clauses that ensure that the training was carried out in compliance with regulations for the protection of personal data and intellectual property);
        • the roles of the operators in the value chain (establishing which party is the provider, which is the deployer and which is the distributor or importer);
        • the classification of the AI system (describing the AI system and the GPAI model on which it is based, if any, and also specifying the permitted purposes of use);
        • the moment of entry into force of the applicable provisions of the AI Act.

 

 

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 on Artificial Intelligence (AI Act), in Official Journal of the European Union L 202 of 12 July 2024, available on EUR-Lex.

[2] A white label product is defined as “a product that is made by one company but sold by another company using their own name” (Oxford Advanced Learner’s Dictionary, Oxford University Press, 2025).

[3] Ibidem.

[4] Ibidem.

[5] According to Article 3(23) of the AI Act, “substantial modification” means “a change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed”.

[6] According to recital 128 of the AI Act, it does not constitute a “substantial modification” if an AI system continues to learn or adapt automatically after being placed on the market or put into service, provided that such developments have been foreseen by the provider and assessed during the initial conformity assessment. In other words, controlled and documented self-learning falls within the scope of the original system, without triggering a new provider qualification.

[7] According to Article 3(12) of the AI Act, “intended purpose” means “the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation”.

[8] For further details, please refer to Annex XII of the AI Act.

[9] European Commission, Guidelines on the scope and the obligations of providers of general-purpose AI models under the AI Act, July 2025, available on Shaping Europe’s Digital Future.

[10] European Commission, General-Purpose AI Models in the AI Act – Questions & Answers, available on Shaping Europe’s Digital Future.

[11] Ibidem, Section 3.2 and, in particular, paragraph (63).

 

ICTLC Italy
italy@ictlegalconsulting.com