On 26 November 2023, the US Cybersecurity and Infrastructure Security Agency (CISA), together with the UK’s National Cyber Security Centre (NCSC), published joint ‘Guidelines for Secure AI System Development’ (the Guidelines).
The Guidelines were formulated by CISA and the NCSC, in cooperation with 21 other international agencies and ministries, as well as industry experts.
These Guidelines aim to ensure that developers integrate cybersecurity into the development process from the outset and throughout, adopting what is known as a ‘secure by design’ approach.
The Guidelines are divided into four phases of the AI system development lifecycle, each of which sets out behaviours to improve cybersecurity at all levels:
- Secure design
- i) Raise staff awareness of threats and risks;
- ii) Model the threats to your system;
- iii) Design your system for security, functionality and performance; and
- iv) Consider security benefits and trade-offs when selecting your AI model.
- Secure development
- i) Secure your supply chain;
- ii) Identify, track and protect your assets;
- iii) Document your data, models and prompts; and
- iv) Manage your technical debt.
- Secure deployment
- i) Secure your infrastructure;
- ii) Protect your model continuously;
- iii) Develop incident management procedures;
- iv) Release AI responsibly; and
- v) Make it easy for users to do the right things.
- Secure operation and maintenance
- i) Monitor your system’s behaviour;
- ii) Monitor your system’s input (see the illustrative sketch after this list);
- iii) Follow a secure by design approach to updates; and
- iv) Collect and share lessons learned.
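By way of illustration only, the sketch below shows one way the ‘monitor your system’s input’ behaviour might look in practice: logging a hash and basic metadata for each incoming prompt so that inputs can be audited later. The Guidelines themselves are technology-neutral and contain no code, so the function names, fields and size limit here are illustrative assumptions rather than anything prescribed by CISA or the NCSC.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical prompt-logging helper; names, fields and the size limit are
# illustrative assumptions, not requirements drawn from the Guidelines.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai-input-monitor")

MAX_PROMPT_CHARS = 4_000  # example threshold; tune to your own system

def record_prompt(user_id: str, prompt: str) -> bool:
    """Log an incoming prompt for later audit and flag oversized inputs.

    Returns True if the prompt passes the basic size check, False otherwise.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        # Store a hash rather than the raw text where prompts may contain
        # personal or confidential data.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt_length": len(prompt),
        "within_limit": len(prompt) <= MAX_PROMPT_CHARS,
    }
    logger.info(json.dumps(entry))
    return entry["within_limit"]

if __name__ == "__main__":
    record_prompt("user-123", "Summarise this contract clause for me.")
```

In a real deployment, records of this kind would feed into the incident-management procedures and behaviour monitoring described under the secure deployment and secure operation and maintenance phases.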
The Guidelines build on practices from the NCSC’s ‘Secure Development and Deployment Guidance’, NIST’s ‘Secure Software Development Framework’ and the ‘Secure by Design’ principles published by CISA, the NCSC and international cyber agencies, which collectively emphasise:
- Taking ownership of security outcomes for customers;
- Committing to accountability and transparency; and
- Building organisational structure and leadership to ensure that secure by design is a top business priority.
In parallel, the European Union Agency for Cybersecurity (ENISA) published a ‘Multilayer Framework for Good Cybersecurity Practices for AI’ for EU member states and bodies in June 2023, which sets out recommendations to enhance cybersecurity throughout the AI system lifecycle. Please see our blog about this for further information.
What’s next? While the Guidelines are primarily aimed at providers of AI systems, the NCSC and CISA advise that all stakeholders, including developers, decision-makers, data scientists, managers and risk owners, review the Guidelines and make informed decisions with respect to each stage of the AI system lifecycle.