On 23 December 2024, Draft Law No. 8476 was issued to implement key provisions of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, which establishes harmonized rules on artificial intelligence (the AI Regulation). The Draft Law, currently pending approval by Luxembourg’s Parliament (Chambre des Députés), focuses on implementing the organizational and procedural aspects of the AI Regulation in preparation for the application of certain of its provisions in February 2025. The measures proposed in the draft law align with the European Union’s (EU) well-established regulatory framework. As such, its adoption presents an opportunity to gain a deeper insight into the procedural aspects of the AI Regulation.
Background
The AI Regulation is a landmark legislative effort, establishing the first comprehensive global framework to regulate the increasingly widespread use of artificial intelligence (AI). It reflects the European Union’s ambition to lead in governing sensitive, previously unregulated areas and to steer the global economy toward sustainability, much like its efforts in the environmental, social, and governance (ESG) domain (see several contributions in our October 2024 Newsletter for more on this topic).
With the rise of so-called Big Tech companies, primarily based in the United States, and the growing market for AI-embedded products and services, the AI Regulation requires an extensive regulatory structure. This framework must effectively identify high-risk AI systems and implement appropriate remedies within the EU’s common market of twenty-seven Member States.
The organizational structure, powers, and procedures outlined in Draft Law No. 8476 are largely inspired by Regulation (EU) 2019/1020 on market surveillance and compliance of products (the Compliance Regulation). As the cornerstone of EU market surveillance, the Compliance Regulation establishes mechanisms for ensuring adherence to harmonized EU rules, defining enforcement powers, sanctions, and cooperation procedures for national authorities.
The AI Regulation does not detail all procedural aspects of its enforcement, instead delegating to Member States the responsibility of identifying the competent authorities in accordance with the principles set out in the Compliance Regulation. Draft Law No. 8476 thus helps bridge this regulatory gap within Luxembourg's national framework.
Notifying and Conformity Assessment Bodies
As in other sectors governed by EU regulations, market entry for AI products and services relies primarily on conformity self-assessment by national producers and service providers. Providers deploying non-high-risk AI systems, such as chatbots or AI-based recommendation engines, must ensure compliance with the transparency and ethical principles outlined in the AI Regulation, but are not required to obtain prior approval before introducing their products to the market. Each provider must determine whether its AI system falls under the “high-risk” category using the criteria defined in Annex III of the AI Regulation.
Providers of high-risk AI systems, by contrast, are not only under an obligation of self-assessment but must also ensure that the authorities of the Member State where they are established are formally notified of their deployment. For this purpose, the AI Regulation designates certain national authorities as “notifying authorities”, tasked with informing the European Commission and other national authorities of the notifications received. Draft Law No. 8476 clarifies that in Luxembourg these notifying authorities are essentially (Article 2 thereof):
- The Office luxembourgeois d’accréditation et de surveillance (OLAS);
- The Agence luxembourgeoise des médicaments et produits de santé (ALMPS) with respect to high-risk AI systems applied to medical devices and their accessories, and diagnostic medical devices; and
- The Commissariat du gouvernement à la protection des données auprès de l’État (CGPD) with respect to AI systems potentially affecting personal data as needed in procedures managed by the state and its ministries and bodies.
Given the risk of regulatory capture and the need for an unbiased application of the AI Regulation, Draft Law No. 8476 expressly establishes that notifying authorities must exercise their powers independently, impartially and without bias (Article 5 thereof, elaborating on Article 31(6) of the AI Regulation). As bodies entrusted with functions of general interest, notifying authorities are expected to play a crucial role in ensuring respect for fundamental rights.
It is worth noting that the assessment of high-risk AI systems is not carried out directly by the notifying authorities but by conformity assessment bodies (CABs). These are independent organizations, designated by the notifying authorities themselves, that assess the compliance of AI systems classified as high-risk with the rules of the AI Regulation, relying on standards, documentation, testing and audits, and verifying respect for safety, transparency and human oversight requirements. As CABs ultimately prepare the relevant notifications for the notifying authorities, they are regulated entities subject to the requirements of the AI Regulation (including impartiality) and remain under the surveillance of the notifying authorities.
Due to the sensitive nature of the data involved, in the case of high-risk AI systems intended for use by law enforcement, immigration or asylum authorities, or by EU institutions or bodies, the assessment normally carried out by CABs is entrusted to the Commission nationale pour la protection des données (“CNPD”) (Article 6 of Draft Law No. 8476).
Surveillance Authorities
Surveillance authorities have tasks of wider scope than notifying authorities, encompassing the oversight of compliance with the AI Regulation by all market operators. Draft Law No. 8476 adopts a competence-based approach to identify surveillance authorities in Luxembourg. It expands the tasks of existing authorities and bodies to include oversight of all relevant stakeholders of AI systems (e.g., suppliers, distributors, deployers, operators). Accordingly, the following entities are identified as surveillance authorities, insofar as AI systems are placed on the market, put into service or used by entities subject to their supervision (Article 7 of Draft Law No. 8476):
- the Commission nationale pour la protection des données (CNPD);
- the Autorité de contrôle judiciaire;
- the CSSF and the Commissariat aux assurances (CAA);
- the Institut luxembourgeois de la normalisation, de l’accréditation, de la sécurité et qualité des produits et services (ILNAS);
- the Institut luxembourgeois de régulation (ILR);
- the Agence luxembourgeoise des médicaments et produits de santé (ALMPS); and
- the Autorité luxembourgeoise indépendante de l’audiovisuel (ALIA).
Among these, the CNPD is designated as the default horizontal market surveillance authority, a choice readily explained by the fact that a large share of the data processed by AI systems is personal data and most AI practices covered by the AI Regulation involve the use of personal data. It is also worth noting that, in conformity with the rules of the banking union, the CSSF must communicate to the European Central Bank any information on AI systems, identified in the course of its market surveillance activities, that could be of potential interest for the latter's prudential supervision tasks.
For regulatory completeness, Draft Law No. 8476 defines the missions of surveillance authorities by express reference to the missions contemplated under the Compliance Regulation (Article 8 thereof). With respect to AI-embedding products or services marketed in the EU common market, these can be summarized as (i) oversight of market operators, (ii) adoption of appropriate and proportionate corrective actions in case of non-compliance with the AI Regulation, and (iii) proportionate and adequate sanctioning, where required.
Furthermore, Draft Law No. 8476 also refers to the list of powers considered under the Compliance Regulation (Article 9 thereof), the most significant of which are the following:
- obtaining documents and, in general, information of any kind on AI systems, as relevant for the enquiry, from market operators;
- launching enquiries and investigations into market operators;
- carrying out inspections and dawn raids as well as physical checks of products;
- obtaining access to premises, land, means of transport, etc.;
- requiring market operators to take appropriate actions to end non-compliance or eliminate risks;
- taking measures in case of failure to take corrective actions or eliminate risks, including restricting the availability of products on the market, or ordering to withdraw or recall them;
- sanctioning market operators; and
- acquiring product samples, including under a cover identity, to inspect them.
Like notifying authorities, surveillance authorities must also exercise their powers independently, impartially and without bias (Article 11 thereof, elaborating on Article 31(6) of the AI Regulation). Thus, by ensuring compliance with the AI Regulation, surveillance authorities play a critical role in fostering trust in AI systems while safeguarding public interests.
Cooperation among EU National Authorities
Draft Law No. 8476 also contains rules clarifying the provisions of the AI Regulation on cooperation among national authorities (both notifying and surveillance), which is crucial to ensure a uniform application of the regulation across EU Member States, thereby enhancing the fairness and predictability of its enforcement. In this framework, the CNPD is designated as the single contact point (in accordance with Article 70(2) of the AI Regulation).
Against this background, in conformity with the principle of loyal cooperation (Article 4(3) of the Treaty on European Union), national authorities must coordinate and cooperate where required for the application of the AI Regulation. In line with these objectives, in AI regulation as in other regulated sectors, national authorities may establish formal cooperation agreements to enhance information sharing and coordination.
Under the AI Regulation, oversight decisions taken by a national authority to correct cases of non-compliance with effects potentially extending beyond the Member State’s territory must be notified to the European Commission and to fellow surveillance authorities. In case of non-compliance with such corrective decisions, national authorities may prohibit or restrict the marketing of the non-compliant AI systems (Article 79 of the AI Regulation). Where even such restrictive measures remain unobserved, the European Commission may step in and steer a Union safeguard procedure involving the national authorities and operators concerned, which may conclude with a decision binding on these parties and effective for the whole EU common market (Article 81 of the AI Regulation).
In accordance with Article 14 of Draft Law No. 8476, professional secrecy, although protected under various Luxembourg sectoral regulations, must not obstruct cooperation and the exchange of information when necessary to ensure effective market surveillance and enforcement of the AI Regulation.
The Way forward
Draft Law No. 8476 significantly aligns Luxembourg’s regulatory landscape with the EU’s ambitious efforts to establish a harmonized AI framework aimed at ensuring that AI systems entering the market adhere to transparency, ethical, and safety requirements. From this standpoint, both the competence-based approach to surveillance and the designation of the CNPD as the single contact point reflect the critical role of expertise and data protection in AI governance, underscoring the need for robust oversight mechanisms, especially for high-risk AI applications.
As in other EU regulated areas, effective cross-border coordination, information sharing and action against non-compliant operators will be essential to prevent regulatory gaps and inconsistencies that could undermine the overarching objectives of the AI Regulation. Achieving the right balance between innovation and regulatory oversight will be crucial to ensuring that AI technologies contribute positively to society while mitigating potential risks. This is particularly significant considering that AI systems are largely developed by non-European industry.
In this respect, the reliance on CABs for high-risk AI systems, though not unknown in other areas of the EU common market, also introduces potential challenges, including the risk of regulatory fragmentation and forum shopping across EU Member States. Ensuring that CABs operate independently, impartially, and in close coordination with notifying authorities will be crucial to maintaining public trust and ensuring consistent enforcement across jurisdictions.