The Canadian Centre for Cyber Security (Cyber Centre) has joined the French Cybersecurity Agency (ANSSI) in releasing joint guidance on a risk-based approach to support trusted artificial intelligence (AI) systems and secure AI supply chains.
AI impacts almost every sector, from defence to energy. While it presents many opportunities for organizations, threat actors can exploit vulnerabilities and jeopardize the use of AI technology. Organizations and stakeholders need to assess the risks linked to their increased reliance on AI and their rapid adoption of large language models (LLMs). Understanding and mitigating these risks is critical to fostering trusted AI development and implementation.
AI systems face the same cyber security threats as any other information system. However, there are AI-specific risks, particularly those related to the central role of data in AI systems, that pose unique challenges to confidentiality and integrity.
Some of the main AI-specific risks to consider are:
- AI hosting and management infrastructure compromises
- supply chain attacks
- lateral movement via interconnections between AI systems and IT systems
- long-term loss of control over information systems
- malfunction in AI system responses
Deploying AI systems can open new paths of attack for threat actors. Organizations must conduct an analysis to assess the risks, understand the AI supply chain and identify the appropriate security measures.
This joint guidance provides guidelines for AI users, operators and developers, including:
- adjusting the autonomy level of the AI system to the risk analysis, business needs and criticality of the actions undertaken
- mapping the AI supply chain
- tracking interconnections between AI systems and other information systems
- continuously monitoring and maintaining AI systems
- implementing a process to anticipate major technological and regulatory changes
- identifying new and potential threats
- providing training and raising awareness
This joint guidance also provides recommended actions, including:
- prohibiting the use of AI systems to automate critical actions
- ensuring AI is appropriately integrated into critical processes with safeguards
- performing a dedicated risk analysis
- studying the security of each stage of the AI system lifecycle
Read the full joint guidance, Building trust in AI through a cyber risk-based approach, to learn more.