UK’s NCSC Publishes “The Guidelines for Secure AI System Development”
The UK’s National Cyber Security Centre (NCSC) just published The Guidelines for Secure AI System Development, which sets out some basic principles for protecting artificial intelligence (AI) and machine learning (ML) systems from cyberthreats. The material is thought-provoking and relevant, but one of the most impressive things about it is the sheer number of entities that contributed to it and endorsed it.
No fewer than 21 other international agencies and ministries endorsed it, including the NSA, CISA, the FBI, and cyber agencies from Israel, France, Poland, Japan, Italy, Germany, and many others. Corporations ranging from Google and IBM to Amazon and Microsoft contributed as well, as did the RAND Corporation and Stanford University, among others. When so many organizations come together to share suggested best practices for security, it makes sense to listen.
NCSC CEO Lindy Cameron said, “These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”
The Guidelines are intended to help developers make informed decisions about cybersecurity as they produce new AI systems. Some of what they have to say is pretty basic, but still important, e.g., make sure that the infrastructure hosting AI systems is secure. The document does, however, contain useful insights about risks that are not yet widely understood.
For example, the document discusses novel security vulnerabilities affecting AI systems. These include “data poisoning,” which involves injecting corrupted or malicious data into a model’s training data to cause it to generate unintended outputs. This could be a serious issue when considered in light of existing controversies around AI, such as the use of algorithms in criminal sentencing. Sentencing algorithms are already under attack for racial “machine bias.” That problem stems from the use of real, if biased, data. How much worse could it be if someone maliciously introduced false, highly biased data into the AI system? That’s the kind of problem these guidelines seek to address.
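To make the concept concrete, here is a minimal, hypothetical sketch of one common poisoning technique, label flipping, using synthetic data and scikit-learn. The NCSC document contains no code, and real attacks are usually subtler, but the mechanic is the same: tamper with a targeted slice of the training data and the trained model quietly inherits the attacker’s bias.

```python
# Toy sketch of training-data poisoning via label flipping (synthetic data; not
# from the NCSC guidelines). An attacker corrupts labels for a targeted subgroup
# so the trained model inherits the attacker's bias for that subgroup.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic two-feature dataset where the true rule is simply x0 > 0.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)

# Attacker flips labels only where x1 > 1.0 (the "targeted" subgroup).
X_poisoned, y_poisoned = X.copy(), y.copy()
targeted = X_poisoned[:, 1] > 1.0
y_poisoned[targeted] = 1 - y_poisoned[targeted]

clean_model = DecisionTreeClassifier(random_state=0).fit(X, y)
poisoned_model = DecisionTreeClassifier(random_state=0).fit(X_poisoned, y_poisoned)

# Inputs from the targeted subgroup can now be scored very differently,
# even though the real-world relationship never changed.
probe = np.array([[0.5, 1.5],    # inside the targeted region
                  [0.5, -0.5]])  # outside it
print("clean    :", clean_model.predict(probe))
print("poisoned :", poisoned_model.predict(probe))
```

Provenance tracking and validation of training data are the kinds of controls aimed at catching exactly this sort of tampering.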
The guidelines comprise four areas of practice:
- Secure design—which suggests that system designers and developers understand the cyber risks they face and model threats as they embed security into the design.
- Secure development—which relates to developers understanding the security of their software supply chains, along with documentation, and the management of assets and technical debt.
- Secure deployment—which involves protecting infrastructure and models from compromise, threat or loss, and developing responsible release processes.
- Secure operation and maintenance—which includes standard security operations processes like logging, monitoring, incident response, and threat sharing (a minimal logging sketch follows this list).
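As one small illustration of the “secure operation and maintenance” area, the sketch below logs each model interaction as a structured audit record. The field names and hashing choices are assumptions for illustration only; the guidelines describe the practice, not a specific implementation.

```python
# Minimal illustration of logging model usage for audit and monitoring.
# Field names and setup are illustrative assumptions, not prescribed by the guidelines.
import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.log"))

def log_inference(user_id: str, model_version: str, prompt: str, output: str) -> None:
    """Record who queried which model and when, without storing raw content."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model_version": model_version,
        # Hash the prompt/output so analysts can correlate incidents
        # without the audit trail itself becoming a data leak.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))

log_inference("analyst-42", "model-v1.3", "example prompt", "example output")
```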
The recommendations are all great. The question, of course, is whether anyone will follow them. That remains to be seen, but if AI security is like any other branch of security, the answer is that follow-through will be inconsistent. As is the case everywhere, resources are not unlimited. And, as the document points out, the pace of development may push security into a secondary position. That would be a mistake, especially in use cases where serious consequences can flow from tainted AI, e.g., war-fighting scenarios, medical decision-making, criminal justice, and so forth.
Industry experts are weighing in on the Guidelines, with Anurag Gurtu, Chief Product Officer of StrikeReady, remarking, “The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence.”
Troy Batterberry, CEO and founder of EchoMark, advised caution. He said, “While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”
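To illustrate the general idea Batterberry is pointing at (not EchoMark’s actual product), here is a deliberately naive sketch that fingerprints a document per recipient using zero-width characters, so a leaked copy can be traced back to whoever received it. Real watermarking schemes are far more robust to copy-paste, reformatting, and deliberate stripping.

```python
# Toy illustration of per-recipient information watermarking. This is not how
# EchoMark or any specific product works; it only shows the general idea.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def embed(text: str, recipient_id: int, bits: int = 16) -> str:
    """Append an invisible bit pattern identifying the recipient."""
    pattern = "".join(ZW1 if (recipient_id >> i) & 1 else ZW0 for i in range(bits))
    return text + pattern

def extract(text: str, bits: int = 16) -> int:
    """Recover the recipient ID from the trailing zero-width characters."""
    tail = [c for c in text if c in (ZW0, ZW1)][-bits:]
    return sum((1 << i) for i, c in enumerate(tail) if c == ZW1)

marked = embed("Quarterly figures attached.", recipient_id=1234)
print(marked == "Quarterly figures attached.")  # False: the watermark is present but invisible
print(extract(marked))                          # 1234
```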
We’ll have to see how well the AI and software industries deal with AI security. These Guidelines represent an important early step in the right direction, however. They are general in nature, offering a fair amount of well-trodden best practices that people don’t always follow. Yet, even if adherence is uneven, it’s essential that we have these conversations now. If AI is the future, then AI risk is part of that future. We need to be dealing with it now, and these Guidelines show a path forward.