Engaging with Artificial Intelligence

The Canadian Centre for Cyber Security (Cyber Centre) joined the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) and the following international partners to provide guidance on how to use artificial intelligence (AI) systems securely:

  • New Zealand National Cyber Security Centre (NCSC-NZ)
  • United Kingdom (UK) National Cyber Security Centre (NCSC-UK)
  • United States (US) Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA)

Engaging with Artificial Intelligence focuses on using AI systems securely. It summarizes some important threats related to AI systems and prompts organizations to consider steps they can take to engage with AI while managing risk. It provides mitigations to assist organizations that use self-hosted AI systems as well as those that use third-party hosted AI systems.

Like all digital systems, AI presents both opportunities and threats. To take advantage of the benefits of AI securely, all stakeholders involved with these systems, such as programmers, end users, senior executives, analysts, and marketers, should take the time to understand which threats apply to them and how those threats can be mitigated.

This publication outlines some common AI-related threats. These threats are presented not to dissuade AI use, but to assist all AI stakeholders in engaging with AI securely. The publication also presents cyber security mitigations that can help defend against these threats.

For additional guidance on AI-specific threats, threat actor tactics, risk management, and developing secure AI, see the following resources:
