Joint guidance on building trust in artificial intelligence through a cyber risk-based approach

The Canadian Centre for Cyber Security (Cyber Centre) has joined the French Cyber Security Agency (ANSSI) in releasing joint guidance on a risk-based approach to support trusted artificial intelligence (AI) systems and secure AI supply chains.

AI impacts almost every sector, from defence to energy. While it presents many opportunities for organizations, threat actors can exploit vulnerabilities in AI systems and jeopardize their use. Organizations and stakeholders need to assess the risks linked to their increased reliance on AI and their rapid adoption of large language models (LLMs). Understanding and mitigating these risks is critical to fostering trusted AI development and implementation.

AI systems face the same cyber security threats as any other information system. However, there are AI-specific risks, particularly those related to the central role of data in AI systems, that pose unique challenges to confidentiality and integrity.

Some of the main AI-specific risks to consider are:

  • AI hosting and management infrastructure compromises
  • supply chain attacks
  • lateral movement via interconnections between AI systems and IT systems
  • long-term loss of control over information systems
  • malfunction in AI system responses

Deployment of AI systems can open new paths of attack for threat actors. Organizations must conduct an analysis to assess the risks, understand the AI supply chain and identify the appropriate security measures.

This joint guidance provides guidelines for AI users, operators and developers, including:

  • adjusting the autonomy level of the AI system to the risk analysis, business needs and criticality of the actions undertaken
  • mapping the AI supply chain
  • tracking interconnections between AI systems and other information systems
  • continuously monitoring and maintaining AI systems
  • implementing a process to anticipate major technological and regulatory changes
  • identifying new and potential threats
  • providing training and raising awareness

This joint guidance also provides recommended actions, including:

  • prohibiting the use of AI systems to automate critical actions
  • ensuring AI is appropriately integrated into critical processes with safeguards
  • performing a dedicated risk analysis
  • studying the security of each stage of the AI system lifecycle

Read the full joint guidance Building trust in AI through a cyber risk-based approach to learn more.
