On 7 June 2023, the European Union Agency for Cybersecurity (ENISA) released its report Multilayer Framework for Good Cybersecurity Practices for AI (the “Framework”) in response to the evolving landscape of artificial intelligence (AI) and the cybersecurity challenges that come with it. The publication aims to establish a robust framework that promotes cybersecurity practices throughout the entire AI lifecycle, from conceptualisation to decommissioning. This blog summarises the main features of the Framework.
AI-Related Cybersecurity Framework
The Framework is primarily intended to assist stakeholders involved in AI, including national regulators, developers, manufacturers, service providers, and professional users, in securing their AI systems, operations, and processes. It comprises three distinct layers, each addressing a specific aspect of cybersecurity practice in the AI domain.
- Cybersecurity Foundations
Layer I focuses on baseline cybersecurity measures for the information and communication technology (ICT) infrastructure hosting AI systems. ENISA emphasises that these measures must be implemented in line with all relevant EU legislation, including the General Data Protection Regulation (GDPR), the revised Network and Information Security Directive (NIS2), and the Cybersecurity Act (CSA).
Companies using AI systems should implement a robust two-stage security management process consisting of risk analysis followed by risk management. The process should be dynamic, repeated over time to reflect the evolving nature of AI.
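To make the two-stage process concrete, the following is a minimal illustrative sketch in Python. It is not taken from the Framework itself; the risk entries, the 1-to-5 scoring scale, and the treatment threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """A single entry in a simple AI/ICT risk register."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain); illustrative scale
    impact: int      # 1 (negligible) to 5 (severe); illustrative scale

    @property
    def score(self) -> int:
        # Classic likelihood-times-impact scoring.
        return self.likelihood * self.impact

# Stage 1: risk analysis -- identify and score risks to the ICT
# infrastructure hosting the AI system (entries are hypothetical).
register = [
    Risk("Unpatched model-serving host", likelihood=4, impact=4),
    Risk("Training-data exfiltration", likelihood=2, impact=5),
    Risk("Weak API authentication", likelihood=3, impact=4),
]

# Stage 2: risk management -- choose a treatment for each risk. The
# threshold of 12 is an arbitrary example, not an ENISA value.
TREAT_THRESHOLD = 12

for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "mitigate now" if risk.score >= TREAT_THRESHOLD else "monitor"
    print(f"{risk.name}: score={risk.score} -> {action}")
```

Because the AI threat landscape evolves, such a register would be re-scored on a regular cadence rather than assessed once, which is what makes the process dynamic.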
ENISA highlights the effectiveness of cybersecurity certification in ensuring compliance, pointing to standards such as ISO/IEC 15408 (the Common Criteria for IT security evaluation) and its companion evaluation methodology, ISO/IEC 18045. These standards underpin methodologies such as the ENISA Sectoral Cybersecurity Assessment, which is used to evaluate ICT products.
- AI-Specific Cybersecurity
Layer II focuses on the AI-specific requirements associated with securing AI components throughout their lifecycle, regardless of the industry sector.
This section highlights the importance of legislation designed specifically to regulate AI. The main focus is on the proposed EU AI Act, which entered the final stage of negotiations on 14 June 2023, and on the draft AI Liability Directive, which would establish rules on non-contractual civil liability for damage caused by AI systems.
ENISA stresses the need for ongoing AI threat assessments throughout the entire lifecycle, encompassing technical, physical, and AI-specific threats, including bias and the loss of transparency and interpretability. AI-related risks also extend to societal threats, which likewise need to be considered. When conducting these assessments, AI stakeholders should bear in mind ethical considerations such as transparency, fairness, accuracy, explainability, and accountability.
In addition to AI-specific threat assessments, stakeholders should put in place security controls and testing mechanisms to ensure that systems are technically robust at every stage, from design to deployment.
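As an illustration of what such a testing mechanism might look like in practice, the sketch below checks that a model's prediction stays stable under small random input perturbations, a crude proxy for robustness against evasion-style manipulation. This is our own example, not a test prescribed by ENISA; the `predict` function, the perturbation size `eps`, and the 95% acceptance threshold are all hypothetical.

```python
import numpy as np

def predict(x: np.ndarray) -> int:
    """Stand-in for a deployed model's prediction function (hypothetical).
    Here it is a trivial linear classifier, for demonstration only."""
    weights = np.array([0.5, -0.25, 1.0])
    return int(weights @ x > 0)

def perturbation_stability(x: np.ndarray, eps: float = 0.05,
                           trials: int = 100) -> float:
    """Fraction of small random perturbations that leave the prediction
    unchanged -- a rough robustness indicator, not a formal guarantee."""
    baseline = predict(x)
    rng = np.random.default_rng(seed=0)
    unchanged = sum(
        predict(x + rng.uniform(-eps, eps, size=x.shape)) == baseline
        for _ in range(trials)
    )
    return unchanged / trials

sample = np.array([1.0, 2.0, -0.5])
stability = perturbation_stability(sample)

# A deployment gate might require, say, 95% stability (illustrative threshold).
assert stability >= 0.95, f"Robustness check failed: {stability:.0%} stable"
print(f"Prediction stable under {stability:.0%} of perturbations")
```

A check of this kind could run in a continuous integration pipeline so that robustness is verified at the design, testing, and deployment stages alike.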
Finally, standards bodies such as ISO and IEC are working on comprehensive AI-related cybersecurity standards, as current certifications cover only specific aspects of AI.
- Sector-Specific Cybersecurity for AI
Layer III provides additional recommendations for four specific sectors: Energy, Automotive, Telecommunications, and Health. ENISA focuses on these sectors because they already have relevant cybersecurity guidelines in place.
What’s next?
The release of the ENISA Framework marks an important step towards establishing cybersecurity practices covering the entire AI lifecycle. While AI-specific standards are still being developed, ENISA recommends that companies treat AI cybersecurity as a distinct discipline rather than merely an extension of their existing practices for securing ICT infrastructure. Moving forward, stakeholders in sectors such as Energy, Automotive, Telecommunications, and Health should pay particular attention to the sector-specific recommendations to enhance the security of their AI systems.
The Framework also provides a useful basis for providers of AI systems in meeting the obligations the EU AI Act will place on them, in particular the requirements to operate a continuous, iterative risk management system that identifies, estimates, and evaluates known and foreseeable risks, and to adopt suitable measures to manage those risks. The Framework will give providers a head start on compliance ahead of the Act's expected agreement later this summer. Once published, there will be a 24-month period before it becomes effective (estimated Q3/Q4 2025).