NCSC releases guidelines for secure AI development

New guidance from the National Cyber Security Centre (NCSC) aims to help businesses securely innovate with AI and minimise risks

Providers of internal and external AI systems, operating across the UK and beyond, will be able to use the NCSC document to make more informed decisions about design, deployment and operation of machine learning.

The first of its kind to be produced and agreed globally, the whitepaper has been approved by international agency signatories from 18 countries, including G7 members, Australia and Israel.


Large language models in cybersecurityWith generative AI being a possible a game changer for legitimate businesses and cybercrime gangs alike, we explore the double-edged sword large language models present to cybersecurity.


“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said Linda Cameron, CEO of NCSC.

“These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

Science and Technology Secretary Michelle Donelan commented: “I believe the UK is an international standard bearer on the safe use of AI. The NCSC’s publication of these new guidelines will put cyber security at the heart of AI development at every stage so protecting against risk is considered throughout.

“Just weeks after we brought world-leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort.

“In doing so, we are driving forward in our mission to harness this decade-defining technology and seize its potential to transform our NHS, revolutionise our public services and create the new, high-skilled, high-paid jobs of the future.”

An NCSC-hosted panel discussion around the advice is set to take place today, featuring Microsoft, the Alan Turing Institute and UK, American, Canadian, and German cyber security agencies.

The guidelines

The NCSC’s guidance is split into four main areas within the AI system development life cycle:

  1. secure design — addressing understanding risks and threat modelling in the opening stages;
  2. secure development — including supply chain security, documentation, and asset and technical debt management;
  3. secure deployment — including how to protect infrastructure and models from compromise, threat or loss, as well as exploring responsible release;
  4. secure operation and maintenance — covering logging and monitoring, update management and information sharing.

Top cyber security monitoring tools for your businessHow can your business effectively and cost-efficiently bolster its cyber security defence arsenal? Here are some useful tools.


Design security

At the design stage of AI model development, the NCSC recommends the following measures:

Raise staff awareness

Once security threats are understood by stakeholders in the organisation, data scientists and developers must maintain awareness of said threats and failure modes, to help make informed decisions going forward.

Developers should be trained in secure coding techniques and responsible AI practices, while users need guidance on unique security risks facing AI systems.

Model threats to your system

With possible risks differing from algorithm to algorithm, a holistic process should be applied to effectively assess threats, including potential impacts to the system, users, organisations, and wider society.

Also, assessments should consider possible growth of threats as AI systems increasingly become viewed as high value targets, as well as rising automated cyber attacks.

Balance security with functionality and performance

Considerations should be made around supply chain security, and whether to develop AI systems in-house or via an external API.

Due diligence evaluations should be made before opting to utilise external model providers and/or libraries, taking the partner company’s security posture into account.

Decisions to be made regarding user experience include effective guardrails, the most secure settings possible to be made by default, and users’ requirements to opt into a system following delivery of explanation around riskier capabilities.

Additionally, integrations into existing secure development and operations best practices should be carried out using coding practices and languages that reduce or remove known vulnerabilities, where plausible.

Consider security benefits and trade-offs

Firms must address various requirements including choices of model architecture, configuration, training data, training algorithm and hyperparameters.

Other considerations would are also likely to include amount of parameters involved, suitability for meeting specific business needs, and the ability to align, interpret and explain your model’s outputs.

Development security

Once planning phases are complete, it’s time to move on to security measures for AI model development. Here, the NCSC states:

Secure the supply chain

Assessments and monitoring should be carried out across the system lifecycle, with external suppliers required to adhere to the same standards your own business applies to other software.

Models being developed outside the firm call for acquisition and maintenance of well-secured and well-documented hardware and software components, including data, libraries, modules, middleware, frameworks and external APIs.

In addition, failover measures need to be in place in case of security criteria not being met.

Identify, track and protect assets

Value of all AI-related assets — including models, data (including user feedback), prompts, logs and assessments — must be clearly and widely understood, along with where access to them enables an attacker.

Processes and tools should be in place to track, authenticate, version control and secure all assets, as well as strong backup protocol in the case of compromise.

Document data, models and prompts

Documentation should cover creation, operation, and life cycle management of any models, datasets and system prompts.

This needs to include sources of training data, intended scope and limitations, guardrails, retention time and cryptographic hashes or signatures.

Manage technical debt

Management of technical debt — sprawl of messy code as a result of quicker but limited solutions being utilised — is a risk of any aspect of software development, and businesses developing AI must address this as early as possible.

Stakeholders should ensure that lifecycle plans (including processes to decommission AI systems) assess, acknowledge and mitigate risks to future similar systems.

Deployment security

Next comes the security measures to implement when deploying AI models. According to the NCSC, these entail the following:

Secure infrastructure

Strong infrastructure security principles should be used in every part of the system lifecycle, applying appropriate access controls to APIs, models and data; as well as to training and processing pipelines; and R&D.

Examples of protocol to be considered include appropriate segregation of environments holding sensitive code or data, to allow for mitigation of standard cyber attacks.

Continuously protect models

Firms need to stay one step ahead of attackers looking to reconstruct model functionality or access systems, by continuously validating models through creating and sharing cryptographic hashes and/or signatures.

Where appropriate, privacy-enhancing technologies (such as differential privacy or homomorphic encryption) can be used to explore or assure risk levels associated with consumers, users and attackers.

Develop incident management procedures

Incident management in the form of response, escalation and remediation plans should be widely practiced across the business.

Security plans must reflect a variety of scenarios, and be regularly reassessed as the system and wider research evolves.

Additionally, stakeholders should store critical company digital resources in offline backups, and staff need to be properly trained to assess and address AI-related incidents.

Responsible release

Only after subjecting AI models, applications or systems to appropriate and effective security evaluation — including benchmarking and red teaming — should such products be released.

Also, users should be made clear about known limitations or potential failure modes.

Ensure ease of mitigation for users

Ideally, the most secure settings possible should be maintained by default, and be capable of mitigating common threats.

Controls should be put in place to prevent the use or deployment of your system in malicious ways, with users being guided on appropriate use, as well as how their data will be used and stored, and the aspects of security they are responsible for.

Operation and maintenance security

Lastly, the NCSC makes the following recommendations for properly securing operation and maintenance of AI models:

Monitor behaviour

Outputs and performance of models and systems should be measured, so that sudden and gradual changes in behaviour affecting security can be properly observed.

Firms can account for and identify potential intrusions and compromises, as well as natural data drift.

Monitor all input

Input of data including inference requests, queries and prompts must also be monitored, in line with privacy and data protection requirements.

This will enable compliance obligations, audit, investigation and remediation in the case of compromise or misuse.

Signs of compromise or misuse to be considered include explicit detection of out-of-distribution and/or adversarial inputs.

Utilise a secure-by-design approach

Security by design, including automated updates by default and secure, modular update procedures, is vital for keeping artificial intelligence systems protected.

Testing and evaluation, among other update procedures — reflecting changes to data, models or prompts — can lead to changes in system behaviour, which calls for support provided to users to evaluate and respond to model changes.

Collect and share lessons learned

Stakeholders should get involved in information-sharing communities, collaborating across the global ecosystem of industry, academia and governments, to share best practice as appropriate.

Additionally, open lines of communication for feedback regarding system security, both internally and externally to your organisation, should be maintained, with issues including vulnerability disclosures being shared with wider communities, where necessary.

More information on the new ‘Guidelines for Secure AI System Development‘ from the National Cyber Security Centre (NCSC) can be found here.

Related:

Protecting against cyber attacks backed by generative AIWith threat actors turning to generative AI capabilities to evolve attacks, here’s how businesses can stay protected.

Avatar photo

Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.