Regulatory Sandboxes for AI and Cybersecurity: Bridging the Gap Between Innovation and Compliance
Authors: Davide Baldini, Jacopo Dirutigliano, Laura Senatore, Lorenzo Covello
Background Information
Regulatory sandboxes allow companies to test innovative technological products within a controlled real-world environment under a specific framework developed and monitored by one or more competent authorities, for a limited period of time.[1] As a flexible and dynamic regulatory tool, sandboxes are particularly well-suited for governing disruptive technologies, such as artificial intelligence (AI), where traditional regulatory approaches may struggle to keep pace with rapid innovation.
By offering a safe, supervised space for experimentation, regulatory sandboxes aim to foster a proactive and collaborative relationship between regulators and businesses.
Regulatory Sandboxes under the AI Act and the Cyber-Resilience Act
Regulatory sandboxes are provided for both by the EU’s Artificial Intelligence Act (AI Act) and by the Cyber Resilience Act (CRA).
Under the AI Act, each Member State is required to establish at least one sandbox by 2 August 2026. They may also set up sandboxes at the regional[2] and local levels, as well as joint sandboxes co-managed with other Member States.
These sandboxes allow AI providers to develop, test, and validate innovative AI systems under the supervision of AI market surveillance authorities. Authorities are required to provide guidance to participants about regulatory expectations and how to fulfill the requirements set out in the AI Act. This may include testing the AI system under real-world conditions. Furthermore, when the AI system involves the processing of personal data, national data protection authorities must participate in the sandbox, with a view to supervising and offering guidance on the relevant GDPR obligations, which apply alongside the AI Act.[3]
The CRA, which will apply from December 2027, also allows Member States to establish sandboxes at the national level, providing a controlled environment in which innovative products with digital elements can be developed, designed, validated, and tested against the CRA’s cybersecurity requirements.
Regulatory sandboxes may thus serve as a collaborative tool between regulators and companies, bridging the gap between innovation, AI governance, and cybersecurity requirements,[4] with regulators assisting companies in creating new products that are compliant by design.
Advantages for Businesses and Regulators
Once the sandbox process has been successfully concluded, regulators issue an exit report, which companies may leverage to demonstrate their compliance with the relevant regulations, including when undergoing conformity assessments. Companies involved in a sandbox can therefore both accelerate market access and achieve compliance by design, gaining a competitive edge over their competitors.
This is especially valuable in light of the multiple layers of EU digital regulation that may apply to the development and marketing of innovative products (e.g., the AI Act, GDPR, and CRA may apply cumulatively in some instances), which often leave companies facing numerous compliance challenges. In this respect:
- For startups and SMEs, participation is particularly valuable, as it allows them to implement compliance requirements early, ensuring their products are compliant by design with the AI Act, CRA and GDPR. This not only reduces regulatory uncertainty but also enhances market readiness and increases the value of the solution in the eyes of potential investors and partners.
- For larger corporations, early engagement with regulators helps streamline complex compliance processes, preventing costly redesigns and ensuring alignment with evolving EU standards.
- For investors, an innovative product that has successfully completed a sandbox – as documented in the exit report – signals a strong regulatory posture, lower compliance risks, and a higher likelihood of long-term success.
At the same time, authorities benefit from “regulatory learning”, gaining valuable insights into how new technological applications work before they are widely deployed to the public. This enables authorities to better anticipate risks, refine their supervisory practices, and ensure that the regulatory framework remains adaptive and responsive to technological developments – rather than reacting only after technologies are already in use.[5]
However, being admitted to and successfully completing a sandbox is not always an easy task. In most cases, expert guidance is essential to fully leverage the sandbox’s benefits.
Possible Challenges for Businesses
Companies may not always be able to identify the most suitable sandbox to join, especially when multiple EU regulations apply. Furthermore, depending on the organization’s size and market objectives, local sandboxes may be preferable to national ones, or vice versa. In addition, the number of applications admitted to each sandbox is limited, so the sandbox proposal must be carefully drafted and presented.
Thus, maximizing the opportunities offered by a regulatory sandbox requires careful planning and strategic resource allocation. Companies must present a well-structured sandbox plan and maintain ongoing coordination with regulatory authorities throughout the process, which can be challenging.
As a result, to fully leverage the potential of sandboxes and avoid possible drawbacks, businesses benefit greatly from specialized guidance and advisory support, ensuring they meet expectations, streamline the process, and maximize the strategic advantages that sandboxes offer.
Conclusion
Regulatory sandboxes under the AI Act and Cyber Resilience Act represent a strategic opportunity to develop innovative products and services in a controlled environment.
For companies, they provide structured regulatory support, allowing participants to test, validate, and refine their AI technologies while addressing legal requirements early in the process. Participation can accelerate market entry, reduce legal uncertainty, and build trust with regulators and customers alike. However, success requires careful preparation: companies must clearly define testing objectives, select the most appropriate sandbox, and engage proactively with authorities.
For regulators, sandboxes offer valuable insights into emerging technologies, promoting evidence-based governance.
[1] European Commission, “Better regulation toolbox”, July 2023, p. 599 et seq., available at https://commission.europa.eu/law/law-making-process/better-regulation/better-regulation-guidelines-and-toolbox_en.
[2] In Italy, Tuscany is the first region to have formally approved a local regulatory sandbox on AI and cybersecurity, through Article 25 of Regional Law No. 57 of 9 December 2024 on Digital Innovation and Digital Citizenship Rights, available at: https://raccoltanormativa.consiglio.regione.toscana.it/articolo?urndoc=urn:nir:regione.toscana:legge:2024-12-09;57&dl_t=text/xml&dl_a=y&dl_id=&pr=idx,0;artic,0;articparziale,1&anc=art25.
[3] Davide Baldini and Kate Francis, “AI Regulatory Sandboxes between the AI Act and the GDPR: the role of Data Protection as a Corporate Social Responsibility”, ITASEC 2024: The Italian Conference on CyberSecurity, available at: https://ceur-ws.org/Vol-3731/paper07.pdf.
[4] The Italian Cybersecurity National Lab has recently published a white paper on sandboxes, analyzing legal and practical issues and providing recommendations to authorities: Bagni F. and Seferi F. (eds.) (2025), Regulatory Sandboxes for AI and Cybersecurity: Questions and Answers for Stakeholders, CINI’s Cybersecurity National Lab, ISBN: 9788894137378, available at: https://cybersecnatlab.it/white-paper-on-regulatory-sandboxes/.
[5] Enza Cirone, “Gli spazi di sperimentazione normativa nell’Unione europea: regolamentare l’innovazione tra principi e prassi applicative”, in Rivista Italiana di Informatica e Diritto, 2025, vol. 7(1), available at https://doi.org/10.32091/RIID0211.