Levy Olvera • November 27, 2023

Framework Series | UK National Cyber Security Centre (NCSC) Guidelines for secure AI system development

Key highlights and takeaways

In recent years, various organisations, including national cyber security centres such as the UK's National Cyber Security Centre (NCSC), have been actively developing guidelines and principles for secure AI system development. These guidelines aim to ensure that artificial intelligence technologies are developed and deployed securely and responsibly, mitigating the risks and threats associated with AI systems.


We previously discussed the NIST Artificial Intelligence Risk Management Framework (AI RMF) 1.0 and the milestone it represents.


You might also be interested in:
Framework Series | Artificial Intelligence Risk Management Framework


Now it's time to talk about the recently published Guidelines for secure AI system development by the UK National Cyber Security Centre (NCSC).


The collaboration between the UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and numerous other international partners underscores the global effort and importance placed on ensuring secure and responsible development, deployment, and operation of AI systems.


Key highlights and takeaways from this initiative include:


Purpose and Collaboration: The guidelines aim to establish a "secure by design" approach to AI development, emphasising security as a core requirement throughout the AI system's life cycle. The collaborative effort involves several international agencies, signifying the global recognition of the need for standardised security measures in AI.


Target Audience and Scope: The guidelines target providers of AI systems, encompassing both those developing AI systems from scratch and those utilising tools and services from other providers. Additionally, the guidelines aim to reach various stakeholders involved in AI development, including data scientists, developers, managers, decision-makers, and risk owners.


Focus Areas: The guidelines are structured around the AI system development life cycle, with four key areas highlighted (a short illustrative sketch follows the list):


Secure design

Secure development

Secure deployment

Secure operation and maintenance
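
To make "secure deployment" a little more concrete, here is a minimal, illustrative sketch of one common practice in that area: verifying a model artifact against a digest published by its provider before loading it. This example is not taken from the NCSC guidelines; the function name, file path, and digest are assumptions made for illustration.

```python
import hashlib
from pathlib import Path

def verify_model_artifact(path: str, expected_sha256: str) -> None:
    """Refuse to load a model file whose SHA-256 digest does not match
    the digest published by the model provider (a supply-chain check)."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(
            f"Integrity check failed for {path}: "
            f"expected {expected_sha256}, got {digest}"
        )

# Hypothetical usage -- the path and digest below are placeholders:
# verify_model_artifact("models/classifier-v1.safetensors", "ab12...")
```

Checks like this reflect the guidelines' "secure by design" theme: security controls are applied before a model reaches production rather than bolted on afterwards.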


Contributions to AI Ethics and Security: These guidelines add to the growing body of work dedicated to ensuring safe, secure, and trustworthy AI. They complement other international efforts, such as the G7 Hiroshima AI Process, the US Voluntary AI Commitments, and the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence issued by President Biden.


International Collaboration: The guidelines were formulated with input from a wide range of international partners across 18 countries, demonstrating the collaborative effort to establish a unified approach to AI security.


The commitment to regular reviews and the encouragement of feedback from stakeholders highlight the adaptive nature of these guidelines, ensuring they remain relevant and effective in addressing evolving AI security challenges.


These guidelines provide a comprehensive framework and set of considerations to mitigate security risks associated with AI development, fostering a safer and more reliable AI landscape.


The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF) 1.0 and the NCSC-led Guidelines for secure AI system development share a common goal: managing the risks associated with artificial intelligence (AI). They are, however, distinct frameworks developed by different entities, with different scopes, approaches, and focuses.


Here's a comparison between the two:


Scope and Focus:


NCSC Guidelines: The guidelines focus on best practices and principles for secure AI system development, touching on a broad range of considerations, including security, transparency, fairness, privacy, and accountability.

NIST AI RMF: The AI RMF is a risk management framework specifically tailored to AI systems, providing a structured approach to identifying, assessing, and mitigating the risks unique to AI technologies.

Development and Structure:


NCSC Guidelines: Developed by cybersecurity agencies and their international partners, the guidelines take the form of principles and recommended practices organised around the four life-cycle stages above.

NIST AI RMF: Developed by NIST, the AI RMF is a structured framework akin to the traditional NIST Risk Management Framework but tailored for AI. It sets out a systematic process for identifying AI-related risks, assessing their potential impact, and implementing mitigation strategies, organised around four core functions: Govern, Map, Measure, and Manage (a minimal sketch of such an identify-assess-mitigate loop appears after this comparison).

Applicability and Adoption:


NCSC Guidelines: The guidelines are voluntary; organisations can adopt them as a reference for developing and deploying AI systems securely.

NIST AI RMF: NIST frameworks are often influential in government and industry standards. The AI RMF offers a more formalised, standardised approach to managing AI-related risks, particularly for organisations that already follow NIST standards and guidelines.
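
As a loose illustration of what a structured identify-assess-mitigate process can look like in practice, here is a minimal risk-register sketch. It is not the NIST AI RMF itself; the risk names, the 1-5 scoring scale, and the mitigations are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in an illustrative AI risk register."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) to 5 (severe)   -- assumed scale
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring, a common convention.
        return self.likelihood * self.impact

# Identify: record risks relevant to the AI system.
register = [
    AIRisk("Training-data poisoning", 3, 4, "Validate and provenance-check data sources"),
    AIRisk("Model theft via exposed API", 2, 5, "Authenticate and rate-limit inference endpoints"),
    AIRisk("Prompt injection", 4, 3, "Sanitise inputs; constrain tool permissions"),
]

# Assess and prioritise: review the highest-scoring risks first,
# then implement the corresponding mitigations.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}: {risk.mitigation}")
```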

While both initiatives aim to strengthen the security and risk management of AI, they differ in approach, specificity, and intended audience. Organisations interested in secure AI development and risk management may find value in exploring, and potentially integrating, the principles and recommendations of both frameworks to enhance the security and resilience of their AI systems.


Source and further reading:


National Cyber Security Centre. (2023). Guidelines for secure AI system development. https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development


Released: AI security guidelines backed by 18 countries. (n.d.). Newsfusion. https://go.newsfusion.com/security/item/2256222