Artificial intelligence (AI) refers to computer programs that find patterns in data to make predictions or classifications. AI can perform specific tasks by analyzing data to replicate aspects of human thought processes and decision-making. Machine learning, a subset of AI, uses algorithms and data, such as language, text and multimedia, to help a computer system learn and improve from its own experience. Deep learning is a subset of machine learning that uses vast volumes of data and a layered structure of algorithms to train a model to make intelligent decisions independently.
What AI can do
AI already plays a big role in our everyday lives by providing recommendations, information, answers to questions and help with organizing our schedules. Applications like search engines, online shopping and voice assistants on mobile devices or smart speakers create data and feedback for machine learning tools to learn and improve from.
While AI is commonly used as a digital assistant, it can also be used to enhance your organization’s operations. AI can create code, develop procedural steps, optimize workflows and provide metadata and advanced analysis as a cyber security tool. Its availability and capabilities continue to grow and are becoming an increasingly vital component of cyber security.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has evaluated some security-related AI use cases and determined that they pose no safety-impacting concerns. Some of these use cases include:
- automated detection of personally identifiable information in cyber security data
- confidence scoring for cyber security threat indicators
- malware reverse engineering
- detection of anomalies in critical infrastructure networks
- detection of anomalies in security operations centre networks
- drafting tailored summaries of medical documents for different publication channels
- chat tools for interacting with, summarizing and searching agency materials and internal content
CISA is continuing to explore new ways to integrate AI tools to improve efficiency and strengthen cyber security. For more details, read CISA Artificial Intelligence Use Cases.
What AI can’t do
AI still faces certain fundamental limitations. For example, it can be quite difficult for AI to use intuition or common sense, adapt to different situations and understand cause and effect. Humans, with their judgement and insight, can handle situations that require more intuitive problem-solving and decision-making skills.
How organizations use AI
Organizations use AI in a variety of ways to enhance their processes and reduce costs. Some common ways AI is used include:
Facial recognition
A leading application of AI that analyzes facial features in an image or video to identify or verify an individual.
Process optimization
A machine learning tool trained on accurate data can produce more accurate solutions and perform mundane tasks faster than a human can.
Digital assistants
Chat or voice bots can improve customer service and reduce support costs. Customers can receive help within seconds at any time. These services are often highly personalized and can be based on a user’s preferences and history with the organization.
Fraud detection
Sophisticated machine learning tools can detect fraudulent emails faster than a human can. These tools sort through your inbox and move spam and phishing emails to your junk folder.
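The filtering idea above can be sketched in a few lines. This is a toy illustration only: the indicator words, their weights and the threshold are invented, whereas a real tool would learn them from large volumes of labelled mail.

```python
# Toy sketch of weighted keyword scoring for spam and phishing detection.
# The words, weights and threshold below are invented for illustration;
# a real system would learn them from labelled training data.

SPAM_WEIGHTS = {
    "urgent": 2.0,
    "verify": 1.5,
    "password": 1.5,
    "winner": 2.5,
    "invoice": 0.5,
}
THRESHOLD = 3.0  # tuned against labelled examples in a real system


def spam_score(message: str) -> float:
    """Sum the weights of known spam indicators found in the message."""
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in message.lower().split())


def is_spam(message: str) -> bool:
    """Flag the message when its indicator score crosses the threshold."""
    return spam_score(message) >= THRESHOLD


print(is_spam("urgent verify your password now"))  # True: moved to junk
print(is_spam("meeting notes attached"))           # False: left in inbox
```

Production filters replace the hand-set weights with statistical models trained on millions of messages, but the core step, scoring a message and comparing it to a learned threshold, is the same.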
Document generation
Software applications use AI-powered tools to create well-structured documents. The application uses user prompts to generate formatted documentation.
Coding
AI can use natural language prompts to automatically generate code and functions, increasing productivity and streamlining development tasks. AI can also analyze existing code and suggest improvements to debug it and enhance performance.
Data analysis
Using machine learning algorithms, AI can analyze large amounts of data and discover new patterns. Automating this analysis greatly reduces the processing time spent by a data analyst and improves business performance.
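As a minimal sketch of pattern discovery, the 1-D k-means routine below groups data points without being told where the groups are. The data (login hours) and the choice of two clusters are invented for illustration.

```python
# Minimal 1-D k-means sketch in pure Python: the algorithm surfaces
# groupings in the data on its own. The data and the choice of two
# clusters are invented for illustration.

def kmeans_1d(values, iters=20):
    # initialize two centroids at the extremes of the data
    centroids = [min(values), max(values)]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            # assign each point to its nearest centroid
            idx = 0 if abs(v - centroids[0]) <= abs(v - centroids[1]) else 1
            clusters[idx].append(v)
        # move each centroid to the mean of its assigned points
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters


# login times (hour of day): activity groups around morning and evening
data = [8, 9, 9, 10, 21, 22, 22, 23]
centroids, clusters = kmeans_1d(data)
print([round(c) for c in centroids])  # [9, 22]
```

An analyst reading the raw list might spot these two groups eventually; the algorithm finds them immediately and scales to millions of records.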
Several industries use AI-powered tools to enhance their processes and offer further insight with continuously advancing areas of technology. Some of these industries include:
Healthcare
In the medical industry, AI can help in patient diagnosis and treatment in many ways, for example, through computer-aided diagnostic systems. Machine learning in precision medicine is another highly useful tool and can help predict which treatments are most likely to be successful.
Advertising
AI helps advertising agencies create, optimize and personalize ad campaigns by analyzing user data and delivering content that would appeal to specific audiences. AI reduces the time and cost associated with production through its ability to generate text, images and video content.
Cyber security
AI is useful in detecting new threats to organizations through automation. By using sophisticated algorithms, AI can:
- automate detection of threats such as malware
- run pattern recognition to find relationships between different attack vectors
- provide superior predictive intelligence
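The first bullet above, automated threat detection, often reduces to flagging measurements that deviate sharply from a learned baseline. The sketch below uses a simple standard-deviation test; the traffic figures and the 2-sigma threshold are invented for illustration, and real systems compute the baseline from known-clean data.

```python
# Sketch of statistical anomaly detection for network traffic: flag
# samples that sit far from the mean. Numbers and the 2-sigma threshold
# are illustrative; real tools build the baseline from known-clean data.
from statistics import mean, stdev


def find_anomalies(samples, threshold=2.0):
    """Return the samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [s for s in samples if abs(s - mu) > threshold * sigma]


# bytes transferred per minute; one exfiltration-sized spike
traffic = [120, 130, 125, 118, 122, 127, 5000, 124, 121]
print(find_anomalies(traffic))  # [5000]
```

Note that a large outlier inflates the standard deviation it is measured against, which is one reason production tools prefer robust baselines and more sophisticated models over this simple test.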
The threats involved with AI tools
AI tools are often only as good as the data and models they rely upon. The main threats to AI come from compromises to that data. Common methods of compromise include:
Data poisoning attack
This type of attack occurs during a machine learning tool’s training phase. AI tools rely heavily on accurate data for training. When poisoned (inaccurate) data is injected into the training dataset, the learning system may be taught to make mistakes.
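The effect can be sketched with a deliberately simple 1-D classifier that learns its decision boundary as the midpoint between the two class means. All data values are invented for illustration; injecting mislabelled samples visibly shifts the learned boundary.

```python
# Sketch of label-flipping data poisoning. A simple classifier learns a
# 1-D decision boundary (midpoint of the two class means); mislabelled
# points injected into training shift that boundary. Values are invented.

def train_threshold(samples):
    """Learn a 1-D boundary: the midpoint of the benign and malicious means."""
    benign = [x for x, label in samples if label == "benign"]
    malicious = [x for x, label in samples if label == "malicious"]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2


clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (8, "malicious"), (9, "malicious"), (10, "malicious")]

# the attacker injects high-scoring samples falsely labelled "benign"
poisoned = clean + [(9, "benign"), (10, "benign"), (9, "benign")]

print(train_threshold(clean))     # 5.5: cleanly separates the classes
print(train_threshold(poisoned))  # ~7.3: real attacks below it now pass
```

A sample scoring 7, clearly malicious under the clean model, falls below the poisoned boundary and is waved through, which is exactly the mistake the attacker was teaching.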
Adversarial example
This type of attack occurs after the machine learning tool is trained. The tool is fooled into classifying inputs incorrectly. For example, in the case of autonomous vehicles, an adversarial example could be a slight modification of traffic signs in the physical world (like subtle fading or stickers applied to a stop sign). The modification causes the vehicle’s AI system to misclassify a stop sign as a speed-limit sign. This could seriously impact the safe operation of self-driving vehicles.
Model inversion and membership inference attacks
These scenarios occur when a threat actor queries your organization’s data model. A model inversion attack will reveal the underlying dataset, allowing the threat actor to reproduce the training data. A membership inference attack confirms if a specific data file is part of the training data. Both model inversion and membership inference attacks could compromise the confidentiality and privacy of your training data and expose sensitive information.
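The membership inference side of this can be sketched by exploiting an overfitted model's confidence gap: it is markedly more confident on records it memorized during training. The records, confidence values and threshold below are invented for illustration.

```python
# Sketch of a membership inference attack: an overfitted model is far
# more confident on records it was trained on, so a confidence threshold
# reveals training-set membership. All values below are invented.

TRAINING_SET = {"alice@example.com", "bob@example.com"}


def model_confidence(record):
    """Stand-in for querying a deployed model; mimics an overfitting gap."""
    return 0.99 if record in TRAINING_SET else 0.62


def infer_membership(record, threshold=0.9):
    """The attacker's test: high confidence implies the record was trained on."""
    return model_confidence(record) >= threshold


print(infer_membership("alice@example.com"))    # True: training data exposed
print(infer_membership("mallory@example.com"))  # False
```

In practice the attacker only needs query access to the model, which is why limiting the precision of returned confidence scores is a common mitigation.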
How threat actors use AI
Alongside organizations using AI-powered technologies to enhance their business processes, threat actors also use AI to enhance their cyber attack methods. Some AI-related attacks can include:
- using deepfakes to impersonate authority figures
- impersonating legitimate websites
- altering source code for malware to lower detection rates
- processing public imagery and videos to geolocate facilities and identify industrial control systems to analyze and target equipment and connected systems
What else you should know about AI
AI is continuously evolving with new tools and enhanced features. Other things you should know about AI include:
- AI can detect patterns in data
- AI needs enough data to see the patterns at a high enough frequency or resolution
- AI will have a narrow scope if the data is not diverse
- AI will provide unreliable results if the training data used is not accurate
- data used for training should be complete, diverse and accurate
- missing data may result in some patterns not being discovered, and the patterns that are found might not be accurate
- data that is recorded and collected for quality-control purposes can contain both sensitive and personal information
Many organizations are now adopting trustworthy AI policies to ensure that their use of AI tools minimizes potential biases and unintended consequences, especially regarding the treatment of individuals. Policies may also assist in the development of appropriate protocols for the handling of sensitive and personal information. An example of an AI policy is the Government of Canada's Directive on Automated Decision-Making. If your organization intends to deploy AI, it should consider seeking legal advice to manage the many ethical, privacy, policy and legal considerations that come from using AI.