System and information integrity

The controls and activities in the System and information integrity (SI) family support the protection of the integrity of system components and the data that they process. They enable an organization to identify, report, and correct data and system flaws in a timely manner, to provide protection against malicious code, and to monitor system security alerts and advisories in order to take appropriate actions in response.

SI-01 System and information integrity policy and procedures

Activity

  1. Develop, document, and disseminate to [Assignment: organization-defined personnel or roles]
    1. [Selection (1 or more): Organization-level; Mission/business process-level; System-level] system and information integrity policy that
      1. addresses purpose, scope, roles, responsibilities, management commitment, coordination among organizational entities, and compliance
      2. is consistent with applicable laws, Orders in Council, directives, regulations, policies, standards, and guidelines
    2. procedures to facilitate the implementation of the system and information integrity policy and the associated system and information integrity controls
  2. Designate an [Assignment: organization-defined official] to manage the development, documentation, and dissemination of the system and information integrity policy and procedures
  3. Review and update the current system and information integrity
    1. policy [Assignment: organization-defined frequency] and following [Assignment: organization-defined events]
    2. procedures [Assignment: organization-defined frequency] and following [Assignment: organization-defined events]

Discussion

System and information integrity policy and procedures address the controls in the SI family that are implemented within systems and organizations. The risk management strategy is an important factor in establishing such policies and procedures. Policies and procedures contribute to security and privacy assurance. Therefore, it is important that security and privacy programs collaborate on the development of system and information integrity policy and procedures.

In general, security and privacy program policies and procedures at the organization level are preferable and may remove the need for mission- or system-specific policies and procedures. The policy can be included as part of the general security and privacy policy or be represented by multiple policies that reflect the complex nature of organizations.

Procedures can be established for security and privacy programs, for mission or business processes, and for systems, if needed. Procedures describe how the policies or controls are implemented and can be directed at the individual or role that is the object of the procedure. Procedures can be documented in system security and privacy plans or in one or more separate documents.

Events that may precipitate an update to system and information integrity policy and procedures include assessment or audit findings, security incidents or breaches, or changes in applicable laws, Orders in Council, directives, regulations, policies, standards, and guidelines. Simply restating controls does not constitute an organizational policy or procedure.

Related controls and activities

PM-09, PS-08, SA-08, SI-02, SI-12.

Enhancements

None.

References

TBS Directive on Security Management: Appendix B: Mandatory Procedures for Information Technology Security Control

 

SI-02 Flaw remediation

Control

  1. Identify, report, and correct system flaws
  2. Test software and firmware updates related to flaw remediation for effectiveness and potential side effects before installation
  3. Install security-relevant software and firmware updates within [Assignment: organization-defined time period] of the release of the updates
  4. Incorporate flaw remediation into the organizational configuration management process

Discussion

The need to remediate system flaws applies to all types of software and firmware. Organizations identify systems affected by software flaws, including potential vulnerabilities resulting from those flaws, and report this information to designated organizational personnel with information security and privacy responsibilities. Organizations consider establishing a controlled patching environment for mission-critical systems.

Security-relevant updates include patches, service packs, and malicious code signatures. Organizations also address flaws discovered during assessments, continuous monitoring, incident response activities, and system error handling. By incorporating flaw remediation into configuration management processes, required remediation actions can be tracked and verified.

Organization-defined time periods for updating security-relevant software and firmware may vary based on a variety of risk factors, including the security category of the system, the criticality of the update (i.e., severity of the vulnerability related to the discovered flaw), the organizational risk tolerance, the mission supported by the system, or the threat environment.
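As one illustration of time periods that scale with the criticality of the update, the sketch below maps a CVSS base score to a hypothetical remediation window. The severity tiers, day counts, and thresholds are illustrative assumptions, not values prescribed by this control:

```python
from datetime import date, timedelta

# Hypothetical remediation windows by severity tier. The tiers and day
# counts are assumptions for illustration; each organization defines its
# own time periods under SI-02 based on its risk factors.
REMEDIATION_WINDOWS = {
    "critical": timedelta(days=7),   # CVSS >= 9.0
    "high": timedelta(days=30),      # CVSS 7.0-8.9
    "medium": timedelta(days=60),    # CVSS 4.0-6.9
    "low": timedelta(days=90),       # CVSS < 4.0
}

def severity_tier(cvss_score: float) -> str:
    """Map a CVSS base score to a severity tier."""
    if cvss_score >= 9.0:
        return "critical"
    if cvss_score >= 7.0:
        return "high"
    if cvss_score >= 4.0:
        return "medium"
    return "low"

def patch_deadline(release_date: date, cvss_score: float) -> date:
    """Latest acceptable installation date for a security-relevant update."""
    return release_date + REMEDIATION_WINDOWS[severity_tier(cvss_score)]
```

Under these assumed tiers, `patch_deadline(date(2024, 1, 1), 9.8)` yields 2024-01-08.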

Some types of flaw remediation may require more testing than others. Organizations determine the type of testing needed for the specific type of flaw remediation activity under consideration and the types of changes that are to be configuration-managed. Flaw remediation testing analyzes both the effectiveness of addressing security issues and any potential side effects on functionality, system and system component performance, and operations. When implementing remediation activities, organizations consider the order and timing of updates to validate correct execution within the system environment and to support system and component availability needs (i.e., implementing a staggered deployment strategy). Organizations verify that software and firmware updates come from authorized sources prior to downloading. In testing decisions, organizations consider whether security-relevant software or firmware updates are obtained from authorized sources with appropriate digital signatures.
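One common mechanism for verifying that an update came from an authorized source is comparing its cryptographic digest against the value the vendor publishes alongside the release. The file path and expected digest below are placeholders, and in practice digital-signature verification with the vendor's public key is stronger than a bare hash comparison; this is only a minimal sketch:

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Compute the SHA-256 digest of a downloaded update file in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path: Path, expected_digest: str) -> bool:
    """Return True only if the file matches the vendor-published digest."""
    return sha256_digest(path) == expected_digest.lower()
```

An update whose digest does not match the published value is rejected before installation.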

Related controls and activities

CA-05, CM-03, CM-04, CM-05, CM-06, CM-08, MA-02, RA-05, SA-08, SA-10, SA-11, SI-03, SI-05, SI-07, SI-11.

Enhancements

  • (01) Flaw remediation: Central management
    • Withdrawn: Incorporated into PL-09.
  • (02) Flaw remediation: Automated flaw remediation status
    • Determine if system components have applicable security-relevant software and firmware updates installed using [Assignment: organization-defined automated mechanisms] [Assignment: organization-defined frequency].
    • Discussion: Automated mechanisms can track and determine the status of known flaws for system components.
    • Related controls and activities: CA-07, SI-04.
  • (03) Flaw remediation: Time to remediate flaws and benchmarks for corrective actions
      1. Measure the time between flaw identification and flaw remediation
      2. Establish the following benchmarks for taking corrective actions: [Assignment: organization-defined benchmarks]
    • Discussion: Organizations determine the time it takes on average to correct system flaws after such flaws have been identified and subsequently establish organizational benchmarks (i.e., timeframes) for taking corrective actions. Benchmarks can be established by the type of flaw or the severity of the potential vulnerability if the flaw can be exploited.
    • Related controls and activities: None.
  • (04) Flaw remediation: Automated patch management tools
    • Employ automated patch management tools to facilitate flaw remediation to the following system components: [Assignment: organization-defined system components].
    • Discussion: Using automated tools to support patch management helps to ensure the timeliness and completeness of system patching operations.
    • Related controls and activities: None.
  • (05) Flaw remediation: Automatic software and firmware updates
    • Install [Assignment: organization-defined security-relevant software and firmware updates] automatically to [Assignment: organization-defined system components].
    • Discussion: Due to system integrity and availability concerns, organizations consider the methodology used to carry out automatic updates. Organizations balance the need to ensure that the updates are installed as soon as possible with the need to maintain configuration management and control with any mission or operational impacts that automatic updates might impose (i.e., implementing a staggered deployment strategy).
    • Related controls and activities: None.
  • (06) Flaw remediation: Removal of previous versions of software and firmware
    • Remove previous versions of [Assignment: organization-defined software and firmware components] after updated versions have been installed.
    • Discussion: Previous versions of software or firmware components that are not removed from the system after updates have been installed may be exploited by adversaries. Some products may automatically remove previous versions of software and firmware from the system.
    • Related controls and activities: None.
  • (07) Flaw remediation: Root cause analysis
      1. Conduct root cause analysis to identify the underlying causes of issues or failures
      2. Develop actions to address the root cause of the issue or failure
      3. Implement the actions and monitor the implementation for effectiveness
    • Discussion: Root cause analysis includes a wide range of approaches, tools, and techniques to systematically identify the underlying causes of issues or failures in systems and system components (i.e., hardware, software, and firmware). Organizations consider the severity of the incident to determine what root cause analysis method should be used and how quickly to implement remediation actions. The root cause analysis includes a timeline, missed warning signs, key decisions, gaps, mitigations, and verification of effectiveness. The actions identified to address the source of the issue are implemented and integrated into applicable organizational policies, procedures, and control implementations.
    • Related controls and activities: AC-01, AT-01, AU-01, AU-02, CA-01, CM-01, CP-01, IA-01, IR-01, IR-04, MA-01, MP-01, PE-01, PL-01, PM-01, PS-01, PT-01, RA-01, SA-01, SA-15, SC-01, SI-01, SR-01.
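The measurement described in enhancement (03) can be as simple as computing the elapsed time per flaw and flagging any remediation that exceeds the organization-defined benchmark. In this minimal sketch, the 30-day benchmark is an illustrative assumption (SI-02(03) leaves the value organization-defined):

```python
from datetime import date

# Illustrative benchmark only; SI-02(03) leaves the value
# organization-defined, possibly varying by flaw type or severity.
BENCHMARK_DAYS = 30

def remediation_days(identified: date, remediated: date) -> int:
    """Elapsed days between flaw identification and flaw remediation."""
    return (remediated - identified).days

def exceeds_benchmark(identified: date, remediated: date,
                      benchmark_days: int = BENCHMARK_DAYS) -> bool:
    """True if corrective action took longer than the benchmark allows."""
    return remediation_days(identified, remediated) > benchmark_days
```

Flaws flagged by `exceeds_benchmark` would feed the corrective-action reporting the enhancement calls for.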

References

 

SI-03 Malicious code protection

Control

  1. Implement [Selection (1 or more): signature based; non-signature based] malicious code protection mechanisms at system entry and exit points to detect and eradicate malicious code
  2. Automatically update malicious code protection mechanisms as new releases are available in accordance with organizational configuration management policy and procedures
  3. Configure malicious code protection mechanisms to
    1. perform periodic scans of the system [Assignment: organization-defined frequency] and real-time scans of files from external sources at [Selection (1 or more): endpoint; network entry and exit points] as the files are downloaded, opened, or executed in accordance with organizational policy
    2. [Selection (1 or more): block malicious code; quarantine malicious code; take [Assignment: organization-defined action]]; and send alert to [Assignment: organization-defined personnel or roles] in response to malicious code detection
  4. Address the receipt of false positives during malicious code detection and eradication and the resulting potential impact on the availability of the system

Discussion

System entry and exit points include firewalls, remote access servers, workstations, email servers, web servers, proxy servers, notebook computers, and mobile devices. Malicious code includes viruses, worms, trojans, and spyware. Malicious code can also be encoded in various formats contained within compressed or hidden files or hidden in files using techniques such as steganography.

Malicious code can be inserted into systems in a variety of ways, including by email, web browsing, and portable storage devices. Malicious code insertions occur through the exploitation of system vulnerabilities. A variety of technologies and methods exist to limit or eliminate the effects of malicious code.

Malicious code protection mechanisms include both signature- and non-signature-based technologies. Non-signature-based detection mechanisms include AI techniques that use heuristics to detect, analyze, and describe the characteristics or behaviour of malicious code and to provide controls against such code for which signatures do not yet exist or for which existing signatures may not be effective.

Malicious code for which active signatures do not yet exist or may be ineffective includes polymorphic malicious code (i.e., code that changes signatures when it replicates). Non-signature-based mechanisms also include reputation-based technologies.

In addition to the above technologies, pervasive configuration management, comprehensive software integrity controls, and anti-exploitation software may be effective in preventing the execution of unauthorized code. Malicious code may be present in COTS software and custom-built software and could include logic bombs, backdoors, and other types of attacks that could affect organizational mission and business functions.

In situations where malicious code cannot be detected by detection methods or technologies, organizations rely on other types of controls, including secure coding practices, configuration management and control, trusted procurement processes, and monitoring practices, to ensure that software does not perform functions other than the functions intended.

Organizations may determine that, in response to the detection of malicious code, different actions may be warranted. For example, organizations can define actions in response to malicious code detection during periodic scans, the detection of malicious downloads, or the detection of maliciousness when attempting to open or execute files.
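The selection in part 3.2 of the control (block, quarantine, or another defined action, together with an alert) can be expressed as a small response policy keyed on the detection context. This is a hypothetical sketch: the context names, quarantine location, and stubbed alert mechanism are all assumptions, not part of the control:

```python
import shutil
from pathlib import Path

# Illustrative mapping of detection context to response action (SI-03
# part 3.2); the contexts and actions here are assumptions.
RESPONSE_BY_CONTEXT = {
    "periodic_scan": "quarantine",
    "download": "block",
    "on_open": "quarantine",
}

QUARANTINE_DIR = Path("quarantine")  # placeholder location

def send_alert(message: str) -> None:
    """Stub: in practice, notify organization-defined personnel or roles."""
    print(message)

def respond(path: Path, context: str) -> str:
    """Apply the configured action to a detected file and send an alert."""
    action = RESPONSE_BY_CONTEXT.get(context, "quarantine")
    if action == "quarantine":
        QUARANTINE_DIR.mkdir(exist_ok=True)
        shutil.move(str(path), QUARANTINE_DIR / path.name)
    elif action == "block":
        path.unlink(missing_ok=True)  # discard the blocked file
    send_alert(f"malicious code detected in {path.name} ({context}): {action}")
    return action
```

Separating the context-to-action mapping from the response code mirrors how the control lets organizations define different actions for periodic scans, downloads, and file opens.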

Related controls and activities

AC-04, AC-19, CM-03, CM-08, IR-04, MA-03, MA-04, PL-09, RA-05, SC-07, SC-23, SC-26, SC-28, SC-44, SI-02, SI-04, SI-07, SI-08, SI-15.

Enhancements

  • (01) Malicious code protection: Central management
    • Withdrawn: Incorporated into PL-09.
  • (02) Malicious code protection: Automatic updates
    • Withdrawn: Incorporated into SI-03.
  • (03) Malicious code protection: Non-privileged users
    • Withdrawn: Incorporated into AC-06(10).
  • (04) Malicious code protection: Updates only by privileged users
    • Update malicious code protection mechanisms only when directed by a privileged user.
    • Discussion: Protection mechanisms for malicious code are typically categorized as security-related software and, as such, are only updated by organizational personnel with appropriate access privileges.
    • Related controls and activities: CM-05.
  • (05) Malicious code protection: Portable storage devices
    • Withdrawn: Incorporated into MP-07.
  • (06) Malicious code protection: Testing and verification
      1. Test malicious code protection mechanisms [Assignment: organization-defined frequency] by introducing known benign code into the system
      2. Verify that the detection of the code and the associated incident reporting occur
    • Discussion: None.
    • Related controls and activities: CA-02, CA-07, RA-05.
  • (07) Malicious code protection: Non-signature-based detection
    • Withdrawn: Incorporated into SI-03.
  • (08) Malicious code protection: Detect unauthorized commands
      1. Detect the following unauthorized operating system commands through the kernel application programming interface on [Assignment: organization-defined system hardware components]: [Assignment: organization-defined unauthorized operating system commands]
      2. [Selection (1 or more): issue a warning; audit the command execution; prevent the execution of the command]
    • Discussion: Detecting unauthorized commands can be applied to critical interfaces other than kernel-based interfaces, including interfaces with virtual machines and privileged applications. Unauthorized operating system commands include commands for kernel functions from system processes that are not trusted to initiate such commands as well as commands for kernel functions that are suspicious even though commands of that type are reasonable for processes to initiate.
      Organizations can define the malicious commands to be detected by a combination of command types, command classes, or specific instances of commands. Organizations can also define hardware components by component type, component, component location in the network, or a combination thereof. Organizations may select different actions for different types, classes, or instances of malicious commands.
    • Related controls and activities: AU-02, AU-06, AU-12.
  • (09) Malicious code protection: Authenticate remote commands
    • Withdrawn: Moved to AC-17(10).
  • (10) Malicious code protection: Malicious code analysis
      1. Employ the following tools and techniques to analyze the characteristics and behaviour of malicious code: [Assignment: organization-defined tools and techniques]
      2. Incorporate the results from malicious code analysis into organizational incident response and flaw remediation processes
    • Discussion: The use of malicious code analysis tools provides organizations with a more in-depth understanding of adversary tradecraft (i.e., tactics, techniques, and procedures) and the functionality and purpose of specific instances of malicious code. Understanding the characteristics of malicious code facilitates effective organizational responses to current and future threats. Organizations can conduct malicious code analyses by employing reverse engineering techniques or by monitoring the behaviour of executing code.
    • Related controls and activities: None.
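Enhancement (06) pairs the introduction of a known benign test file (such as the industry-standard EICAR anti-malware test file) with verification that detection and incident reporting actually occurred. A hypothetical sketch of the verification half, which polls the scanner's log for a detection event naming the test file; the log path, event text, and timeout are assumptions:

```python
import time
from pathlib import Path

def detection_reported(log_path: Path, test_file_name: str,
                       timeout_s: float = 60.0, poll_s: float = 1.0) -> bool:
    """Poll the scanner's log until a detection event names the test file,
    or the timeout expires without one (a failed test)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if log_path.exists() and test_file_name in log_path.read_text():
            return True
        time.sleep(poll_s)
    return False
```

A timeout here indicates that either detection or the associated reporting did not occur, which is exactly the condition the enhancement asks organizations to catch.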

References

 

SI-04 System monitoring

Control

  1. Monitor the system to detect
    1. attacks and indicators of potential attacks in accordance with the following monitoring objectives: [Assignment: organization-defined monitoring objectives]
    2. unauthorized local, network, and remote connections
  2. Identify unauthorized use of the system through the following techniques and methods: [Assignment: organization-defined techniques and methods]
  3. Invoke internal monitoring capabilities or deploy monitoring devices
    1. strategically within the system to collect organization-determined essential information
    2. at ad hoc locations within the system to track specific types of transactions of interest to the organization
  4. Analyze detected events and anomalies
  5. Adjust the level of system monitoring activity when there is a change in risk to organizational operations and assets, individuals, other organizations, or Canada
  6. Obtain legal opinion regarding system monitoring activities
  7. Provide [Assignment: organization-defined system monitoring information] to [Assignment: organization-defined personnel or roles] [Selection (1 or more): as needed; [Assignment: organization-defined frequency]]

Discussion

System monitoring includes external and internal monitoring. External monitoring includes the observation of events occurring at external interfaces to the system. Internal monitoring includes the observation of events occurring within the system. Organizations monitor systems by observing audit activities in real time or by observing other system aspects such as access patterns, characteristics of access, and other actions. The monitoring objectives guide and inform the determination of the events. System monitoring capabilities are achieved through a variety of tools and techniques, including intrusion detection and prevention systems, malicious code protection software, scanning tools, audit record monitoring software, and network monitoring software.

Depending on the security architecture, the distribution and configuration of monitoring devices may impact throughput at key internal and external boundaries, as well as at other locations across a network, due to the latency that such devices introduce. If throughput management is needed, such devices are strategically located and deployed as part of an established organization-wide security architecture. Strategic locations for monitoring devices include selected perimeter locations and near key servers and server farms that support critical applications.

Monitoring devices are typically employed at the managed interfaces associated with controls SC-07 and AC-17. The information collected is a function of the organizational monitoring objectives and the capability of systems to support such objectives. Specific types of transactions of interest include Hypertext Transfer Protocol (HTTP) traffic that bypasses HTTP proxies.
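Transactions of interest such as HTTP traffic that bypasses the proxy can often be surfaced from flow records: any outbound flow to an HTTP port whose source is not the proxy itself is suspect. A minimal sketch over hypothetical flow tuples; the proxy address, port list, and record shape are all assumptions:

```python
# Hypothetical flow record shape: (src_ip, dst_ip, dst_port).
# The proxy address and HTTP port list are illustrative assumptions.
def proxy_bypass_flows(flows, proxy_ip="10.0.0.5", http_ports=(80, 8080)):
    """Flag outbound HTTP flows that did not originate from the proxy."""
    return [f for f in flows
            if f[2] in http_ports and f[0] != proxy_ip]
```

Hosts appearing in the flagged flows are reaching web servers directly instead of through the managed interface, which is the kind of transaction this discussion highlights.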

System monitoring is an integral part of organizational continuous monitoring and incident response programs, and output from system monitoring serves as input to those programs. System monitoring requirements, including the need for specific types of system monitoring, may be referenced in other controls (e.g., AC-02G, AC-02(07), AC-02(12)a, AC-17(01), AU-13, AU-13(01), AU-13(02), CM-03F, CM-06D, MA-03A, MA-04A, SC-05(03)b, SC-07A, SC-07(24)b, SC-18B, SC-43B).

Adjustments to levels of system monitoring are based on law enforcement information, intelligence information, or other sources of information. The legality of system monitoring activities is based on applicable laws, Orders in Council, directives, regulations, policies, standards, and guidelines.

Related controls and activities

AC-02, AC-03, AC-04, AC-08, AC-17, AU-02, AU-06, AU-07, AU-09, AU-12, AU-13, AU-14, CA-07, CM-03, CM-06, CM-08, CM-11, IA-10, IR-04, MA-03, MA-04, PL-09, PM-12, RA-05, RA-10, SC-05, SC-07, SC-18, SC-26, SC-31, SC-35, SC-36, SC-37, SC-43, SI-03, SI-06, SI-07, SR-09, SR-10.

Enhancements

  • (01) System monitoring: System-wide intrusion detection system
    • Connect and configure individual intrusion detection tools into a system-wide intrusion detection system.
    • Discussion: Linking individual intrusion detection tools into a system-wide intrusion detection system provides additional coverage and effective detection capabilities. The information contained in one intrusion detection tool can be shared widely across the organization, making the system-wide detection capability more robust and powerful.
    • Related controls and activities: None.
  • (02) System monitoring: Automated tools and mechanisms for real-time analysis
    • Employ automated tools and mechanisms to support near real-time analysis of events.
    • Discussion: Automated tools and mechanisms include host-based, network-based, transport-based, or storage-based event monitoring tools and mechanisms or security information and event management (SIEM) technologies that provide real-time analysis of alerts and notifications generated by organizational systems. Automated monitoring techniques can create unintended privacy risks because automated controls may connect to external or otherwise unrelated systems. The matching of records between these systems may create linkages with unintended consequences. Organizations assess and document these risks in their privacy impact assessment (PIA) and make determinations that are in alignment with their privacy program plan.
    • Related controls and activities: PM-23, PM-25.
  • (03) System monitoring: Automated tool and mechanism integration
    • Employ automated tools and mechanisms to integrate intrusion detection tools and mechanisms into access control and flow control mechanisms.
    • Discussion: Using automated tools and mechanisms to integrate intrusion detection tools and mechanisms into access and flow control mechanisms facilitates a rapid response to attacks by enabling the reconfiguration of mechanisms in support of attack isolation and elimination.
    • Related controls and activities: PM-23, PM-25.
  • (04) System monitoring: Inbound and outbound communications traffic
      1. Determine criteria for unusual or unauthorized activities or conditions for inbound and outbound communications traffic
      2. Monitor inbound and outbound communications traffic [Assignment: organization-defined frequency] for [Assignment: organization-defined unusual or unauthorized activities or conditions]
    • Discussion: Unusual or unauthorized activities or conditions related to system inbound and outbound communications traffic include internal traffic that indicates the presence of malicious code or unauthorized use of legitimate code or credentials within organizational systems or propagating among system components, signaling to external systems, and the unauthorized exporting of information. Evidence of malicious code or unauthorized use of legitimate code or credentials is used to identify potentially compromised systems or system components.
    • Related controls and activities: None.
  • (05) System monitoring: System-generated alerts
    • Alert [Assignment: organization-defined personnel or roles] when the following system-generated indications of compromise or potential compromise occur: [Assignment: organization-defined compromise indicators].
    • Discussion: Alerts may be generated from a variety of sources, including audit records or inputs from malicious code protection mechanisms, intrusion detection or prevention mechanisms, or boundary protection devices such as firewalls, gateways, and routers. Alerts can be automated and may be transmitted by telephone, email, or text message.
      Organizational personnel on the alert notification list can include system administrators, mission or business owners, system owners, information owners/stewards, senior officials in the department’s security governance, appropriate privacy senior officials or executives, system security officers, or privacy officers. In contrast to alerts generated by the system, alerts generated by organizations in SI-04(12) focus on information sources external to the system, such as suspicious activity reports and reports on potential insider threats.
    • Related controls and activities: AU-04, AU-05, PE-06.
  • (06) System monitoring: Restrict non-privileged users
    • Withdrawn: Incorporated into AC-06(10).
  • (07) System monitoring: Automated response to suspicious events
      1. Notify [Assignment: organization-defined incident response personnel (identified by name and/or by role)] of detected suspicious events
      2. Take the following actions upon detection: [Assignment: organization-defined least-disruptive actions to terminate suspicious events]
    • Discussion: Least-disruptive actions include initiating requests for human responses.
    • Related controls and activities: None.
  • (08) System monitoring: Protection of monitoring information
    • Withdrawn: Incorporated into SI-04.
  • (09) System monitoring: Testing of monitoring tools and mechanisms
    • Test intrusion monitoring tools and mechanisms [Assignment: organization-defined frequency].
    • Discussion: It is necessary to test intrusion monitoring tools and mechanisms to ensure that they are operating correctly and continue to satisfy the monitoring objectives of organizations. The frequency and depth of testing depends on the types of tools and mechanisms used by organizations and the methods of deployment.
    • Related controls and activities: None.
  • (10) System monitoring: Visibility of encrypted communications
    • Make provisions so that [Assignment: organization-defined encrypted communications traffic] is visible to [Assignment: organization-defined system monitoring tools and mechanisms].
    • Discussion: Organizations balance the need to encrypt communications traffic to protect data confidentiality with the need to maintain visibility into such traffic from a monitoring perspective. Organizations determine whether the visibility requirement applies to internal encrypted traffic, encrypted traffic intended for external destinations, or a subset of the traffic types.
    • Related controls and activities: None.
  • (11) System monitoring: Analyze communications traffic anomalies
    • Analyze outbound communications traffic at the external interfaces to the system and selected [Assignment: organization-defined internal points within the system] to discover anomalies.
    • Discussion: Organization-defined internal points include subnetworks and subsystems. Anomalies within organizational systems include large file transfers, long-time persistent connections, attempts to access information from unexpected locations, the use of unusual protocols and ports, the use of unmonitored network protocols (e.g., IPv6 usage during IPv4 transition), and attempted communications with suspected malicious external addresses.
    • Related controls and activities: None.
  • (12) System monitoring: Automated organization-generated alerts
    • Alert [Assignment: organization-defined personnel or roles] using [Assignment: organization-defined automated mechanisms] when the following indications of inappropriate or unusual activities with security or privacy implications occur: [Assignment: organization-defined activities that trigger alerts].
    • Discussion: Organizational personnel on the system alert notification list include system administrators, mission or business owners, system owners, the senior agency information security officer, the appropriate privacy senior official or executive, system security officers, or privacy officers.
      Automated organization-generated alerts are the security alerts generated by organizations and transmitted using automated means. The sources for organization-generated alerts are focused on other entities such as suspicious activity reports and reports on potential insider threats. In contrast to alerts generated by the organization, alerts generated by the system in SI-04(05) focus on information sources that are internal to the systems, such as audit records.
    • Related controls and activities: None.
  • (13) System monitoring: Analyze traffic and event patterns
      1. Analyze communications traffic and event patterns for the system
      2. Develop profiles representing common traffic and event patterns
      3. Use the traffic and event profiles to tune system monitoring devices
    • Discussion: Identifying and understanding common communications traffic and event patterns helps organizations provide useful information to system monitoring devices to more effectively identify suspicious or anomalous traffic and events when they occur. Such information can help reduce the number of false positives and false negatives during system monitoring.
    • Related controls and activities: None.
  • (14) System monitoring: Wireless intrusion detection
    • Employ a wireless intrusion detection system (WIDS) to identify rogue wireless devices and to detect attack attempts and potential compromises or breaches to the system.
    • Discussion: Wireless signals may radiate beyond organizational facilities. Organizations proactively search for unauthorized wireless connections, including by conducting thorough scans for unauthorized wireless access points. Wireless scans are not limited to those areas within facilities containing systems but also include areas outside of facilities to verify that unauthorized wireless access points are not connected to organizational systems.
    • Related controls and activities: AC-18, IA-03.
  • (15) System monitoring: Wireless to wireline communications
    • Employ an intrusion detection system to monitor wireless communications traffic as the traffic passes from wireless to wireline networks.
    • Discussion: Wireless networks are inherently less secure than wired networks. For example, wireless networks are more susceptible to eavesdroppers or traffic analysis than wireline networks. When wireless to wireline communications exist, the wireless network could become a port of entry into the wired network. Given that it is easier to gain unauthorized network access from wireless access points than from wired access points, it may be necessary to conduct additional monitoring of traffic transitioning between wireless and wired networks to detect malicious activities. Employing intrusion detection systems to monitor wireless communications traffic helps ensure that the traffic does not contain malicious code prior to transitioning to the wireline network.
    • Related controls and activities: AC-18.
  • (16) System monitoring: Correlate monitoring information
    • Correlate information from monitoring tools and mechanisms employed throughout the system.
    • Discussion: Correlating information from different system monitoring tools and mechanisms can provide a more comprehensive view of system activity. Correlating system monitoring tools and mechanisms that typically work in isolation — including malicious code protection software, host monitoring, and network monitoring — can provide an organization-wide monitoring view and may reveal otherwise unseen attack patterns. Understanding the capabilities and limitations of diverse monitoring tools and mechanisms and how to maximize the use of information generated by those tools and mechanisms can help organizations develop, operate, and maintain effective monitoring programs. The correlation of monitoring information is especially important during the transition from older to newer technologies (e.g., transitioning from IPv4 to IPv6 network protocols).
    • Related controls and activities: AU-06.
  • (17) System monitoring: Integrated situational awareness
    • Correlate information from monitoring physical, cyber, and supply chain activities to achieve integrated, organization-wide situational awareness.
    • Discussion: Correlating monitoring information from a more diverse set of information sources helps to achieve integrated situational awareness. Integrated situational awareness from a combination of physical, cyber, and supply chain monitoring activities enhances the capability of organizations to more quickly detect sophisticated attacks and investigate the methods and techniques employed to carry out such attacks.
      In contrast to SI-04(16), which correlates the various sources of cyber monitoring information, integrated situational awareness is intended to correlate monitoring beyond the cyber domain. Correlation of monitoring information from multiple activities may help reveal attacks on organizations by threat actors operating across multiple attack vectors.
    • Related controls and activities: AU-16, PE-06, SR-02, SR-04, SR-06.
  • (18) System monitoring: Analyze traffic and covert exfiltration
    • Analyze outbound communications traffic at external interfaces to the system and at the following internal points to detect covert exfiltration of information: [Assignment: organization-defined interior points within the system].
    • Discussion: Organization-defined internal points include subnetworks and subsystems. Covert means that can be used to exfiltrate information include steganography.
    • Related controls and activities: None.
  • (19) System monitoring: Risk for individuals
    • Implement [Assignment: organization-defined additional monitoring] of individuals who have been identified by [Assignment: organization-defined sources] as posing an increased level of risk.
    • Discussion: Indications of increased risk from individuals can be obtained from different sources, including personnel records, intelligence agencies, law enforcement organizations, and other sources. The monitoring of individuals is coordinated with the management, legal, security, privacy, and human resources officials who conduct such monitoring. Monitoring is conducted in accordance with applicable laws, Orders in Council, directives, regulations, policies, standards, and guidelines.
    • Related controls and activities: None.
  • (20) System monitoring: Privileged user
    • Implement the following additional monitoring of privileged users: [Assignment: organization-defined additional monitoring].
    • Discussion: Privileged users have access to more sensitive information, including security-related information, than the general user population. Access to such information means that privileged users can potentially do greater damage to systems and organizations than non-privileged users. Therefore, implementing additional monitoring on privileged users helps to ensure that organizations can identify malicious activity at the earliest possible time and take appropriate actions.
    • Related controls and activities: AC-18.
  • (21) System monitoring: Probationary periods
    • Implement the following additional monitoring of individuals during [Assignment: organization-defined probationary period]: [Assignment: organization-defined additional monitoring].
    • Discussion: During probationary periods, employees do not have permanent employment status within organizations. Without such status or access to information that is resident on the system, additional monitoring can help identify any potentially malicious activity or inappropriate behaviour.
    • Related controls and activities: AC-18.
  • (22) System monitoring: Unauthorized network services
      1. Detect network services that have not been authorized or approved by [Assignment: organization-defined authorization or approval processes]
      2. [Selection (1 or more): Audit; Alert [Assignment: organization-defined personnel or roles]] when detected
    • Discussion: Unauthorized or unapproved network services include services in service-oriented architectures that lack organizational verification or validation and may therefore be unreliable or serve as malicious rogues for valid services.
    • Related controls and activities: CM-07.
  • (23) System monitoring: Host-based devices
    • Implement the following host-based monitoring mechanisms at [Assignment: organization-defined system components]: [Assignment: organization-defined host-based monitoring mechanisms].
    • Discussion: Host-based monitoring collects information about the host (or system in which it resides). System components in which host-based monitoring can be implemented include servers, notebook computers, and mobile devices. Organizations may consider employing host-based monitoring mechanisms from multiple product developers or vendors.
    • Related controls and activities: AC-18, AC-19.
  • (24) System monitoring: Indicators of compromise
    • Discover, collect, and distribute to [Assignment: organization-defined personnel or roles], indicators of compromise (IOCs) provided by [Assignment: organization-defined sources].
    • Discussion: IOCs are forensic artifacts from intrusions that are identified on organizational systems at the host or network level. IOCs provide valuable information on systems that have been compromised. IOCs can include the creation of registry key values. IOCs for network traffic include Uniform Resource Locator (URL) or protocol elements that indicate malicious code command-and-control (C2) servers. The rapid distribution and adoption of IOCs can improve information security by reducing the time that systems and organizations are vulnerable to the same exploit or attack. Threat indicators, signatures, tactics, techniques, procedures, and other IOCs may be available via governmental and non-governmental cooperatives, including the Forum of Incident Response and Security Teams (FIRST), the Canadian Cyber Threat Exchange (CCTX), and the Cyber Centre, acting as the Computer Emergency Readiness Team (CERT-CA).
    • Related controls and activities: AC-18.
  • (25) System monitoring: Optimize network traffic analysis
    • Provide visibility into network traffic at external and key internal system interfaces to optimize the effectiveness of monitoring devices.
    • Discussion: Encrypted traffic, asymmetric routing architectures, capacity and latency limitations, and transitioning from older to newer technologies (e.g., IPv4 to IPv6 network protocol transition) may result in blind spots for organizations when analyzing network traffic. Collecting, decrypting, pre-processing, and distributing only relevant traffic to monitoring devices can streamline the effectiveness and use of devices and optimize traffic analysis.
    • Related controls and activities: None.
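
The IOC distribution described in SI-04(24) reduces exposure time only if indicators are actually matched against local telemetry. As a minimal sketch, assuming a hypothetical feed of file hashes and C2 URLs (the feed contents and event fields below are illustrative, not a prescribed schema):

```python
# Hypothetical sketch of SI-04(24): matching observed monitoring events
# against a feed of indicators of compromise (IOCs). The feed entries and
# event dictionary keys are illustrative assumptions only.
IOC_FEED = {
    "hashes": {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
    "urls": {"http://c2.example.test/beacon"},
}

def match_iocs(events, feed=IOC_FEED):
    """Return the subset of events whose file hash or URL appears in the feed."""
    hits = []
    for event in events:
        if event.get("sha256") in feed["hashes"] or event.get("url") in feed["urls"]:
            hits.append(event)
    return hits
```

In practice, indicators from cooperatives such as the CCTX typically arrive in structured exchange formats and are matched by the monitoring platform itself rather than by ad hoc scripts, but the correlation step is the same: local events are tested against externally supplied artifacts.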

References

None.
 

SI-05 Security alerts, advisories, and directives

Control

  1. Receive system security alerts, advisories, and directives from [Assignment: organization-defined external organizations] on an ongoing basis
  2. Generate internal security alerts, advisories, and directives as necessary
  3. Disseminate security alerts, advisories, and directives to: [Selection (1 or more): [Assignment: organization-defined personnel or roles]; [Assignment: organization-defined elements within the organization]; [Assignment: organization-defined external organizations]]
  4. Implement security directives in accordance with established timeframes or notify the issuing organization of the degree of non-compliance

Discussion

The Cyber Centre generates security alerts and advisories to maintain situational awareness throughout the GC. Security directives are issued by TBS or other designated organizations with the responsibility and authority to issue such directives. Compliance with security directives is essential due to the critical nature of many of these directives and the potential (immediate) adverse effects on organizational operations and assets, individuals, other organizations, and Canada should the directives not be implemented in a timely manner. External organizations include supply chain partners, external mission or business partners, external service providers, and other peer or supporting organizations.

Related controls and activities

PM-15, RA-05, SI-02.

Enhancements

  • (01) Security alerts, advisories, and directives: Automated alerts and advisories
    • Broadcast security alert and advisory information throughout the organization using [Assignment: organization-defined automated mechanisms].
    • Discussion: The significant number of changes to organizational systems and operational environments requires the dissemination of security-related information to a variety of organizational entities that have a direct interest in the success of organizational mission and business functions. Based on information provided by security alerts and advisories, changes may be required at one or more of the 3 levels related to risk management, including the governance level, mission and business process level, and the information system level.
    • Related controls and activities: None.

References

TBS Policy on Government Security

 

SI-06 Security and privacy function verification

Control

  1. Verify the correct operation of [Assignment: organization-defined security and privacy functions]
  2. Perform the verification of the functions specified in SI-06A [Selection (1 or more): [Assignment: organization-defined system transitional states]; upon command by user with appropriate privilege; [Assignment: organization-defined frequency]]
  3. Alert [Assignment: organization-defined personnel or roles] to failed security and privacy verification tests
  4. [Selection (1 or more): Shut the system down; Restart the system; [Assignment: organization-defined alternative action(s)]] when anomalies are discovered

Discussion

Transitional states for systems include system start-up, restart, shutdown, and abort. System notifications include hardware indicator lights, electronic alerts to system administrators, and messages to local computer consoles. In contrast to security function verification, privacy function verification ensures that privacy functions operate as expected and are approved by the appropriate privacy senior official or executive or that privacy attributes are applied or used as expected.
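
The verify-then-alert cycle in SI-06 can be sketched as a small harness that runs registered security-function checks at a transitional state such as start-up; the check names and the alert mechanism below are illustrative assumptions, not a prescribed design.

```python
# Hedged sketch of SI-06: run registered security-function checks at a
# transitional state (here, start-up) and alert designated personnel or
# roles on any failure. Check names and the alert sink are illustrative.
CHECKS = {}

def register(name):
    """Decorator that records a verification function under a check name."""
    def wrap(fn):
        CHECKS[name] = fn
        return fn
    return wrap

def verify_all(alert):
    """Run every check; call alert(name) for each failure; return failures."""
    failed = [name for name, fn in CHECKS.items() if not fn()]
    for name in failed:
        alert(name)
    return failed
```

The `[Selection]` response in the base control (shut down, restart, or alternative actions) would then branch on whether `verify_all` returned any failures.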

Related controls and activities

CA-07, CM-04, CM-06, SI-07.

Enhancements

  • (01) Security and privacy function verification: Notification of failed security tests
    • Withdrawn: Incorporated into SI-06.
  • (02) Security and privacy function verification: Automation support for distributed testing
    • Implement automated mechanisms to support the management of distributed security and privacy function testing.
    • Discussion: The use of automated mechanisms to support the management of distributed function testing helps ensure the integrity, timeliness, completeness, and efficacy of such testing.
    • Related controls and activities: SI-02.
  • (03) Security and privacy function verification: Report verification results
    • Report the results of security and privacy function verification to [Assignment: organization-defined personnel or roles].
    • Discussion: Organizational personnel with potential interest in the results of the verification of security and privacy functions include systems security officers, senior officials in the department’s security governance, and appropriate privacy senior officials or executives.
    • Related controls and activities: SI-04, SR-04, SR-05.

References

None.

 

SI-07 Software, firmware, and information integrity

Control

  1. Employ integrity verification tools to detect unauthorized changes to the following software, firmware, and information: [Assignment: organization-defined software, firmware, and information]
  2. Take the following actions when unauthorized changes to the software, firmware, and information are detected: [Assignment: organization-defined actions]

Discussion

Unauthorized changes to software, firmware, and information can occur due to errors or malicious activity. Software includes operating systems (with key internal components, such as kernels or drivers), middleware, and applications. Firmware interfaces include Unified Extensible Firmware Interface (UEFI) and Basic Input/Output System (BIOS). Information includes personal information and metadata that contains security and privacy attributes associated with information. Integrity checking mechanisms — including parity checks, cyclic redundancy checks, cryptographic hashes, and associated tools — can automatically monitor the integrity of systems and hosted applications.
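
One common integrity verification mechanism noted above is the cryptographic hash: a baseline of known-good digests is recorded, and monitored files are re-hashed and compared against it. A minimal sketch, assuming the baseline itself is stored and protected out of band:

```python
# Minimal sketch of hash-based integrity verification (SI-07), assuming a
# baseline of known-good SHA-256 digests maintained and protected out of
# band. Paths and the baseline format are illustrative.
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_baseline(baseline):
    """baseline: dict of path -> expected hex digest. Returns changed paths."""
    return [p for p, expected in baseline.items() if sha256_of(p) != expected]
```

Any path returned by `verify_baseline` would trigger the organization-defined actions in part 2 of the control.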

Related controls and activities

AC-04, CM-03, CM-07, CM-08, MA-03, MA-04, RA-05, SA-08, SA-09, SA-10, SC-08, SC-12, SC-13, SC-28, SC-37, SI-03, SR-03, SR-04, SR-05, SR-06, SR-09, SR-10, SR-11.

Enhancements

  • (01) Software, firmware, and information integrity: Integrity checks
    • Perform an integrity check of [Assignment: organization-defined software, firmware, and information] [Selection (1 or more): at start-up; at [Assignment: organization-defined transitional states or security-relevant events]; [Assignment: organization-defined frequency]].
    • Discussion: Security-relevant events include the identification of new threats to which organizational systems are susceptible and the installation of new hardware, software, or firmware. Transitional states include system start-up, restart, shutdown, and abort.
    • Related controls and activities: None.
  • (02) Software, firmware, and information integrity: Automated notifications of integrity violations
    • Employ automated tools that notify [Assignment: organization-defined personnel or roles] upon discovering discrepancies during integrity verification.
    • Discussion: Using automated tools to report system and information integrity violations and to notify organizational personnel in a timely manner is essential for effective risk response. Personnel with an interest in system and information integrity violations include mission and business owners, system owners, senior officials in the department’s security governance, the appropriate privacy senior official or executive, system administrators, software developers, systems integrators, information security officers, and privacy officers.
    • Related controls and activities: None.
  • (03) Software, firmware, and information integrity: Centrally managed integrity tools
    • Employ centrally managed integrity verification tools.
    • Discussion: Centrally managed integrity verification tools provide greater consistency in the application of such tools and can facilitate more comprehensive coverage of integrity verification actions.
    • Related controls and activities: AU-03, SI-02, SI-08.
  • (04) Software, firmware, and information integrity: Tamper-evident packaging
    • Withdrawn: Incorporated into SA-12.
  • (05) Software, firmware, and information integrity: Automated response to integrity violations
    • Automatically [Selection (1 or more): shut the system down; restart the system; implement [Assignment: organization-defined controls]] when integrity violations are discovered.
    • Discussion: Organizations may define different integrity checking responses by type of information, specific information, or a combination of both. Types of information include firmware, software, and user data. Specific information includes boot firmware for certain types of machines. The automatic implementation of controls within organizational systems includes reversing the changes, halting the system, or triggering audit alerts when unauthorized modifications to critical security files occur.
    • Related controls and activities: None.
  • (06) Software, firmware, and information integrity: Cryptographic protection
    • Implement cryptographic mechanisms to detect unauthorized changes to software, firmware, and information.
    • Discussion: Cryptographic mechanisms used to protect integrity include digital signatures and the computation and application of signed hashes using asymmetric cryptography, protecting the confidentiality of the key used to generate the hash, and using the public key to verify the hash information. Organizations that employ cryptographic mechanisms should also consider cryptographic key management solutions.
    • Related controls and activities: SC-12, SC-13.
  • (07) Software, firmware, and information integrity: Integration of detection and response
    • Incorporate the detection of the following unauthorized changes into the organizational incident response capability: [Assignment: organization-defined security-relevant changes to the system].
    • Discussion: Integrating detection and response helps ensure that detected events are tracked, monitored, corrected, and available for historical purposes. It is important to maintain historical records to identify and discern adversary actions over an extended period and for possible legal actions. Security-relevant changes include unauthorized changes to established configuration settings or the unauthorized elevation of system privileges.
    • Related controls and activities: AU-02, AU-06, IR-04, IR-05, SI-04.
  • (08) Software, firmware, and information integrity: Auditing capability for significant events
    • Upon detection of a potential integrity violation, provide the capability to audit the event and initiate the following actions: [Selection (1 or more): generate an audit record; alert current user; alert [Assignment: organization-defined personnel or roles]; [Assignment: organization-defined other actions]].
    • Discussion: Organizations select response actions based on types of software, specific software, or information for which there are potential integrity violations.
    • Related controls and activities: AU-02, AU-06, AU-12.
  • (09) Software, firmware, and information integrity: Verify boot process
    • Verify the integrity of the boot process of the following system components: [Assignment: organization-defined system components].
    • Discussion: Ensuring the integrity of boot processes is critical to starting system components in known, trustworthy states. Integrity verification mechanisms provide a level of assurance that only trusted code is executed during boot processes.
    • Related controls and activities: SI-06.
  • (10) Software, firmware, and information integrity: Protection of boot firmware
    • Implement the following mechanisms to protect the integrity of boot firmware in [Assignment: organization-defined system components]: [Assignment: organization-defined mechanisms].
    • Discussion: Unauthorized modifications to boot firmware may indicate a sophisticated, targeted attack. These types of targeted attacks can result in a permanent denial of service or a persistent malicious code presence. These situations can occur if the firmware is corrupted or if the malicious code is embedded within the firmware. System components can protect the integrity of boot firmware in organizational systems by verifying the integrity and authenticity of all updates to the firmware prior to applying changes to the system component and preventing unauthorized processes from modifying the boot firmware.
    • Related controls and activities: SI-06.
  • (11) Software, firmware, and information integrity: Confined environments with limited privileges
    • Withdrawn: Moved to CM-07(06).
  • (12) Software, firmware, and information integrity: Integrity verification
    • Require that the integrity of the following software be verified prior to execution: [Assignment: organization-defined software].
    • Discussion: Organizations verify the integrity of software prior to execution to reduce the likelihood of executing malicious code or programs that contain errors from unauthorized modifications. Organizations consider the source of the software, ensuring the software and updates come from authorized sources and/or sites, and the practicality of approaches for verifying software integrity, including the availability of trustworthy checksums from software developers and vendors.
    • Related controls and activities: CM-11, SI-02.
  • (13) Software, firmware, and information integrity: Code execution in protected environments
    • Withdrawn: Moved to CM-07(07).
  • (14) Software, firmware, and information integrity: Binary or machine-executable code
    • Withdrawn: Moved to CM-07(08).
  • (15) Software, firmware, and information integrity: Code authentication
    • Implement cryptographic mechanisms to authenticate the following software or firmware components prior to installation: [Assignment: organization-defined software or firmware components].
    • Discussion: Cryptographic authentication includes verifying that software or firmware components have been digitally signed using certificates recognized and approved by organizations. Code signing is an effective method to protect against malicious code. Organizations that employ cryptographic mechanisms should also consider cryptographic key management solutions.
    • Related controls and activities: CM-05, SC-12, SC-13.
  • (16) Software, firmware, and information integrity: Time limit on process execution without supervision
    • Prohibit processes from executing without supervision for more than [Assignment: organization-defined time period].
    • Discussion: Placing a time limit on process execution without supervision is intended to apply to processes for which typical or normal execution periods can be determined and situations in which organizations exceed such periods. Supervision includes timers on operating systems, automated responses, and manual oversight and response when system process anomalies occur.
    • Related controls and activities: None.
  • (17) Software, firmware, and information integrity: Runtime application self-protection
    • Implement [Assignment: organization-defined controls] for application self-protection at runtime.
    • Discussion: Runtime application self-protection employs runtime instrumentation to detect and block the exploitation of software vulnerabilities by taking advantage of information from the software in execution. Runtime exploit prevention differs from traditional perimeter-based protections, such as guards and firewalls, which can only detect and block attacks by using network information without contextual awareness.
      Runtime application self-protection technology can reduce the susceptibility of software to attacks by monitoring its inputs and blocking those inputs that could allow attacks. It can also help protect the runtime environment from unwanted changes and tampering. When a threat is detected, runtime application self-protection technology can prevent exploitation and take other actions (e.g., sending a warning message to the user, terminating the user's session, terminating the application, or sending an alert to organizational personnel). Runtime application self-protection solutions can be deployed in either monitor or protection mode.
    • Related controls and activities: SI-16.
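
The cryptographic mechanisms in SI-07(06) can be illustrated with a keyed integrity tag. Production deployments typically use digital signatures with managed keys (SC-12, SC-13); an HMAC stands in here only to keep the sketch self-contained.

```python
# Illustrative sketch of SI-07(06): detecting unauthorized changes with a
# keyed cryptographic mechanism. HMAC-SHA-256 is used here for brevity;
# real deployments commonly use digital signatures with managed keys.
import hmac, hashlib

def tag(data: bytes, key: bytes) -> bytes:
    """Compute an integrity tag over the protected data."""
    return hmac.new(key, data, hashlib.sha256).digest()

def unchanged(data: bytes, key: bytes, expected_tag: bytes) -> bool:
    """True only if the data still matches its recorded tag."""
    # compare_digest resists timing side channels during comparison
    return hmac.compare_digest(tag(data, key), expected_tag)
```

A failed comparison indicates either modified data or a wrong key; both cases warrant the unauthorized-change response defined in the base control.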

References

None.
 

SI-08 Spam protection

Control

  1. Employ spam protection mechanisms at system entry and exit points to detect and act on unsolicited messages
  2. Update spam protection mechanisms when new releases are available in accordance with organizational configuration management policy and procedures

Discussion

System entry and exit points include firewalls, remote-access servers, email servers, web servers, proxy servers, workstations, notebook computers, and mobile devices. Spam can be transported by different means, including email, email attachments, and web access. Spam protection mechanisms include signature definitions.

Related controls and activities

PL-09, SC-05, SC-07, SC-38, SI-03, SI-04.

Enhancements

  • (01) Spam protection: Central management
    • Withdrawn: Incorporated into PL-09.
  • (02) Spam protection: Automatic updates
    • Automatically update spam protection mechanisms [Assignment: organization-defined frequency].
    • Discussion: Using automated mechanisms to update spam protection mechanisms helps ensure that updates occur on a regular basis and provide the latest content and protection capabilities.
    • Related controls and activities: None.
  • (03) Spam protection: Continuous learning capability
    • Implement spam protection mechanisms with a learning capability to identify legitimate communications traffic more effectively.
    • Discussion: Learning mechanisms include Bayesian filters that respond to user inputs that identify specific traffic as spam or legitimate by updating algorithm parameters and thereby separating types of traffic more accurately.
    • Related controls and activities: None.
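
The Bayesian filtering described in SI-08(03) can be reduced to a toy log-odds classifier that learns token frequencies from user-labelled messages; the tokenization, smoothing, and training data below are illustrative only.

```python
# Toy sketch of the learning capability in SI-08(03): token counts from
# user-labelled mail drive a naive Bayes log-odds spam score. Real filters
# use far richer features; everything here is illustrative.
import math
from collections import Counter

class BayesFilter:
    def __init__(self):
        self.spam = Counter()
        self.ham = Counter()

    def learn(self, text, is_spam):
        """User feedback updates the token counts for one class."""
        (self.spam if is_spam else self.ham).update(text.lower().split())

    def spam_score(self, text):
        """Log-odds that the message is spam (positive favours spam)."""
        s_total = sum(self.spam.values()) + 1
        h_total = sum(self.ham.values()) + 1
        score = 0.0
        for tok in text.lower().split():
            p_s = (self.spam[tok] + 1) / s_total   # add-one smoothing
            p_h = (self.ham[tok] + 1) / h_total
            score += math.log(p_s / p_h)
        return score
```

Each user correction shifts the algorithm parameters, which is exactly the mechanism the enhancement's discussion describes for separating traffic types more accurately over time.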

References

Implementation guidance: Email domain protection (ITSP.40.065)

 

SI-09 Information input restrictions

Withdrawn: Incorporated into AC-02, AC-03, AC-05, and AC-06.

 

SI-10 Information input validation

Control

Check the validity of the following information inputs: [Assignment: organization-defined information inputs to the system].

Discussion

Checking the valid syntax and semantics of system inputs — including character set, length, numerical range, and acceptable values — verifies that inputs match specified definitions for format and content. For example, if the organization specifies that numerical values between 1 and 100 are the only acceptable inputs for a field in a given application, inputs of “387”, “abc”, or “%K%” are invalid inputs and are not accepted as input to the system.

Valid inputs are likely to vary from field to field within a software application. Applications typically follow well-defined protocols that use structured messages (i.e., commands or queries) to communicate between software modules or system components. Structured messages can contain raw or unstructured data interspersed with metadata or control information.

If software applications use attacker-supplied inputs to construct structured messages without properly encoding such messages, then the attacker could insert malicious commands or special characters that can cause the data to be interpreted as control information or metadata. Consequently, the module or component that receives the corrupted output will perform the wrong operations or otherwise interpret the data incorrectly.

Pre-screening inputs prior to passing them to interpreters prevents the content from being unintentionally interpreted as commands. Input validation ensures accurate and correct inputs and prevents attacks such as cross-site scripting and a variety of injection attacks.
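
The 1-to-100 field check used as an example above can be expressed as a validator that screens both syntax and range before the value reaches any interpreter:

```python
# Sketch of the SI-10 example: accept only a whole number in [low, high].
# Field name and bounds are the illustrative values from the discussion.
def valid_numeric_field(raw: str, low: int = 1, high: int = 100) -> bool:
    """Accept only a whole number within [low, high]; reject everything else."""
    if not raw.isdigit():            # rejects "abc", "%K%", "", negatives
        return False
    return low <= int(raw) <= high   # rejects out-of-range values like "387"
```

Because the syntax check runs first, non-numeric input never reaches `int()`, and the range check then enforces the specified definition of acceptable content.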

Related controls and activities

None.

Enhancements

  • (01) Information input validation: Manual override capability
      1. Provide a manual override capability for input validation of the following information inputs: [Assignment: organization-defined inputs defined in the base control (SI-10)]
      2. Restrict the use of the manual override capability to only [Assignment: organization-defined authorized individuals]
      3. Audit the use of the manual override capability
    • Discussion: In certain situations, such as during events that are defined in contingency plans, a manual override capability for input validation may be needed. Manual overrides are used only in limited circumstances and with the inputs defined by the organization.
    • Related controls and activities: AC-03, AU-02, AU-12.
  • (02) Information input validation: Review and resolution of errors
    • Review and resolve input validation errors within [Assignment: organization-defined time period].
    • Discussion: Resolution of input validation errors includes correcting systemic causes of errors and resubmitting transactions with corrected input. Input validation errors are those related to the information inputs defined by the organization in the base control (SI-10).
    • Related controls and activities: None.
  • (03) Information input validation: Predictable behaviour
    • Verify that the system behaves in a predictable and documented manner when invalid inputs are received.
    • Discussion: A common vulnerability in organizational systems is unpredictable behaviour when invalid inputs are received. Verifying system predictability helps ensure that the system behaves as expected when invalid inputs are received. This occurs by specifying system responses that allow the system to transition to known states without adverse, unintended side effects. The invalid inputs are those related to the information inputs defined by the organization in the base control (SI-10).
    • Related controls and activities: None.
  • (04) Information input validation: Timing interactions
    • Account for timing interactions among system components in determining appropriate responses for invalid inputs.
    • Discussion: When addressing invalid system inputs received across protocol interfaces, timing interactions become relevant, where one protocol needs to consider the impact of the error response on other protocols in the protocol stack.
      For example, 802.11 standard wireless network protocols do not interact well with the Transmission Control Protocol (TCP) when packets are dropped (which could be due to invalid packet input). TCP assumes packet losses are due to congestion, while packets lost over 802.11 links are typically dropped due to noise or collisions on the link. If TCP makes a congestion response, it takes the wrong action in response to a collision event.
      Adversaries may be able to use what appear to be acceptable individual behaviours of the protocols in concert to achieve adverse effects through suitable construction of invalid input. The invalid inputs are those related to the information inputs defined by the organization in the base control (SI-10).
    • Related controls and activities: None.
  • (05) Information input validation: Restrict inputs to trusted sources and approved formats
    • Restrict the use of information inputs to [Assignment: organization-defined trusted sources] and/or [Assignment: organization-defined formats].
    • Discussion: Restricting the use of inputs to trusted sources and in trusted formats applies the concept of authorized or permitted software to information inputs. Specifying known trusted sources for information inputs and acceptable formats for such inputs can reduce the probability of malicious activity. The information inputs are those defined by the organization in the base control (SI-10).
    • Related controls and activities: AC-03, AC-06.
  • (06) Information input validation: Injection prevention
    • Prevent untrusted data injections.
    • Discussion: Untrusted data injections may be prevented using a parameterized interface or output escaping (output encoding). Parameterized interfaces separate data from code so that injections of malicious or unintended data cannot change the semantics of commands being sent. Output escaping uses specified characters to inform the interpreter’s parser whether data is trusted. Preventing untrusted data injections refers to the information inputs defined by the organization in the base control (SI-10).
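A minimal illustration of the parameterized-interface approach described above, using Python's built-in sqlite3 module (the table and input values are contrived for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Untrusted input crafted to subvert a concatenated query.
user_input = "nobody' OR '1'='1"

# Unsafe: the input is spliced into the SQL text, so the injected OR
# clause changes the semantics of the command and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized interface keeps data separate from code; the
# entire input is treated as a single literal value and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # []           -- injection neutralized
```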
    • Related controls and activities: AC-03, AC-06.

References

None.

 

SI-11 Error handling

Control

  1. Generate error messages that provide information necessary for corrective actions without revealing information that could be exploited
  2. Reveal error messages only to [Assignment: organization-defined personnel or roles]

Discussion

Organizations consider the structure and content of error messages. The extent to which systems can handle error conditions is guided and informed by organizational policy and operational requirements. Exploitable information includes stack traces and implementation details; erroneous logon attempts with passwords mistakenly entered as the username; mission or business information that can be derived from, if not stated explicitly by, the information recorded; and personal information, such as account numbers, social insurance numbers, and credit card numbers. Error messages may also provide a covert channel for transmitting information.
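As an illustrative sketch (the incident-ID scheme and logger configuration are assumptions, not prescribed practice), an application can record full diagnostic detail in an internal log that is revealed only to authorized personnel, while returning a generic, correlatable message to the user:

```python
import logging
import uuid

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def handle_request(data: dict) -> str:
    """Return a user-facing result without leaking exploitable detail."""
    try:
        return "balance: %s" % data["account"]["balance"]
    except Exception:
        # Full detail (stack trace, implementation specifics) goes only
        # to the internal log, reviewed by authorized personnel or roles.
        incident = uuid.uuid4().hex[:8]
        log.exception("request failed, incident %s", incident)
        # The caller sees a generic message plus a correlation ID:
        # enough for corrective action, nothing about internals.
        return f"An error occurred. Reference: {incident}"

print(handle_request({}))  # generic message, no stack trace revealed to the user
```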

Related controls and activities

AU-02, AU-03, SC-31, SI-02, SI-15.

Enhancements

None.

References

None.

 

SI-12 Information management and retention

Control

Manage and retain information within the system and information output from the system in accordance with applicable laws, Orders in Council, directives, regulations, policies, standards, guidelines, and operational requirements.

Discussion

Information management and retention requirements cover the full lifecycle of information, in some cases extending beyond system disposal. Information to be retained may also include policies, procedures, plans, reports, data output from control implementation, and other types of administrative information.

Library and Archives Canada provides federal policy and guidance on records retention and schedules. The authority to dispose is granted through a disposition instrument signed by the Librarian and Archivist of Canada. Retention schedules are derived collaboratively between program officials and information management officials and are in line with the goal of retaining personal information for the least amount of time required.

If organizations have a records management office, consider coordinating with records management personnel. Records produced from the output of implemented controls that may require management and retention include, but are not limited to: All XX-01, AC-06(09), AT-04, AU-12, CA-02, CA-03, CA-05, CA-06, CA-07, CA-08, CA-09, CM-02, CM-03, CM-04, CM-06, CM-08, CM-09, CM-12, CM-13, CP-02, IR-06, IR-08, MA-02, MA-04, PE-02, PE-08, PE-16, PE-17, PL-02, PL-04, PL-07, PL-08, PM-05, PM-08, PM-09, PM-18, PM-21, PM-27, PM-28, PM-30, PM-31, PS-02, PS-06, PS-07, PT-02, PT-03, PT-07, RA-02, RA-03, RA-05, RA-08, SA-04, SA-05, SA-08, SA-10, SI-04, SR-02, SR-04, SR-08.

Related controls and activities

AC-16, AU-05, AU-11, CA-02, CA-03, CA-05, CA-06, CA-07, CA-09, CM-05, CM-09, CP-02, IR-08, MP-02, MP-03, MP-04, MP-06, PL-02, PL-04, PM-04, PM-08, PM-09, PS-02, PS-06, PT-02, PT-03, RA-02, RA-03, SA-05, SA-08, SR-02.

Enhancements

  • (01) Information management and retention: Limit personal information elements
    • Limit personal information being processed in the information lifecycle to the following elements of personal information: [Assignment: organization-defined elements of personal information].
    • Discussion: Limiting the retention of personal information throughout the information lifecycle when the information is not needed for operational purposes helps reduce the level of privacy risk created by a system. The information lifecycle includes information creation, collection, use, handling, storage, maintenance, dissemination, disclosure, and disposition. Risk assessments, as well as applicable laws, regulations, and policies, can provide useful inputs to determine which elements of personal information may create risk.
    • GC discussion: The Privacy Regulations stipulate that personal information, if used to make administrative decisions, must be retained for 2 years past the last administrative action, to enable the individual to request access to their personal information. If personal information is aggregated and it is determined that the individual cannot be re-identified using the aggregate data, this information can be held for as long as necessary.
    • Related controls and activities: PM-25.
  • (02) Information management and retention: Minimize personal information in testing, training, and research
    • Use the following techniques to minimize the use of personal information for research, testing, or training: [Assignment: organization-defined techniques].
    • Discussion: Organizations can minimize the risk to an individual’s privacy by employing techniques such as de-identification or synthetic data. Limiting the use of personal information throughout the information lifecycle when the information is not needed for research, testing, or training helps reduce the level of privacy risk created by a system. Risk assessments, as well as applicable laws, regulations, and policies, can provide useful inputs to determine the techniques to use and when to use them.
    • GC discussion: If personal information is to be used for research, testing or training, individuals should be provided prior notice about this use, and the associated personal information bank (PIB), if applicable, should reflect this use of the personal information. If the use of the personal information for these purposes is not considered to be a use consistent with the purpose of collection, consent to use the personal information may be required.
    • Related controls and activities: PM-22, PM-25, SI-19.
  • (03) Information management and retention: Information disposal
    • Use the following techniques to dispose of, destroy, or erase information following the retention period: [Assignment: organization-defined techniques].
    • Discussion: Organizations can minimize both security and privacy risks by disposing of information when it is no longer needed. The disposal or destruction of information applies to originals as well as copies and archived records, including system logs, that may contain personal information.
    • Related controls and activities: None.

References

 

SI-13 Predictable failure prevention

Control

  1. Determine mean time to failure (MTTF) for the following system components in specific environments of operation: [Assignment: organization-defined system components]
  2. Provide substitute system components and a means to exchange active and standby components in accordance with the following criteria: [Assignment: organization-defined MTTF substitution criteria]

Discussion

While MTTF is primarily a reliability issue, predictable failure prevention is intended to address potential failures of system components that provide security capabilities. Failure rates reflect installation-specific considerations rather than the industry average. Organizations define the criteria for the substitution of system components based on the MTTF value with consideration for the potential harm from component failures. The transfer of responsibilities between active and standby components does not compromise safety, operational readiness, or security capabilities. The preservation of system state variables is also critical to help ensure a successful transfer process. Standby components remain available at all times except for maintenance issues or recovery failures in progress.

Related controls and activities

CP-02, CP-10, CP-13, MA-02, MA-06, SA-08, SC-06.

Enhancements

  • (01) Predictable failure prevention: Transferring component responsibilities
    • Take system components out of service by transferring component responsibilities to substitute components no later than [Assignment: organization-defined fraction or percentage] of mean time to failure.
    • Discussion: Transferring primary system component responsibilities to other substitute components prior to primary component failure is important to reduce the risk of degraded or debilitated mission or business functions. Making such transfers based on a percentage of MTTF allows organizations to be proactive based on their risk tolerance. However, the premature replacement of system components can result in the increased cost of system operations.
    • Related controls and activities: None.
  • (02) Predictable failure prevention: Time limit on process execution without supervision
    • Withdrawn: Incorporated into SI-07(16).
  • (03) Predictable failure prevention: Manual transfer between components
    • Manually initiate transfers between active and standby system components when the use of the active component reaches [Assignment: organization-defined percentage] of the MTTF.
    • Discussion: For example, if the MTTF for a system component is 100 days and the MTTF percentage defined by the organization is 90%, the manual transfer would occur after 90 days.
    • Related controls and activities: None.
  • (04) Predictable failure prevention: Standby component installation and notification
    • If system component failures are detected:
      1. ensure that the standby components are successfully and transparently installed within [Assignment: organization-defined time period]
      2. [Selection (1 or more): Activate [Assignment: organization-defined alarm]; Automatically shut down the system; [Assignment: organization-defined action]]
    • Discussion: Automatic or manual transfer of components from standby to active mode can occur upon the detection of component failures.
    • Related controls and activities: None.
  • (05) Predictable failure prevention: Failover capability
    • Provide [Selection (1): real-time; near real-time] [Assignment: organization-defined failover capability] for the system.
    • Discussion: Failover refers to the automatic switchover to an alternate system upon the failure of the primary system. Failover capability includes incorporating mirrored system operations at alternate processing sites or periodic data mirroring at regular intervals defined by the recovery time periods of organizations.
    • Related controls and activities: CP-06, CP-07, CP-09.

References

None.

 

SI-14 Non-persistence

Control

Implement non-persistent [Assignment: organization-defined system components and services] that are initiated in a known state and terminated [Selection (1 or more): upon end of session of use; periodically at [Assignment: organization-defined frequency]].

Discussion

Implementation of non-persistent components and services mitigates risk from advanced persistent threats (APTs) by reducing the targeting capability of adversaries (i.e., window of opportunity and available attack surface) to initiate and complete attacks. By implementing the concept of non-persistence for selected system components, organizations can provide a trusted, known-state computing resource for a specific time period that does not give adversaries sufficient time to exploit vulnerabilities in organizational systems or operating environments.

Since the APT is a high-end, sophisticated threat with regard to capability, intent, and targeting, organizations assume that, over an extended period, a percentage of attacks will be successful. Non-persistent system components and services are activated as required using protected information and are terminated periodically or at the end of sessions. Non-persistence increases the work factor of adversaries attempting to compromise or breach organizational systems.

Non-persistence can be achieved by refreshing system components, periodically reimaging components, or using a variety of common virtualization techniques. Non-persistent services can be implemented by using virtualization techniques as part of virtual machines or as new instances of processes on physical machines (either persistent or non-persistent).

The benefit of periodic refreshes of system components and services is that it does not require organizations to first determine whether compromises of components or services have occurred (something that may often be difficult to determine). The refresh of selected system components and services occurs with sufficient frequency to prevent the spread or intended impact of attacks, but not with such frequency that it makes the system unstable. Refreshes of critical components and services may be done periodically to hinder the ability of adversaries to exploit optimum windows of vulnerabilities.

Related controls and activities

SC-30, SC-34, SI-21.

Enhancements

  • (01) Non-persistence: Refresh from trusted sources
    • Obtain software and data employed during system component and service refreshes from the following trusted sources: [Assignment: organization-defined trusted sources].
    • Discussion: Trusted sources include software and data from write-once, read-only media or from selected offline secure storage facilities.
    • Related controls and activities: None.
  • (02) Non-persistence: Non-persistent information
      1. [Selection (1): Refresh [Assignment: organization-defined information] [Assignment: organization-defined frequency]; Generate [Assignment: organization-defined information] on demand]
      2. Delete information when no longer needed
    • Discussion: Retaining information longer than it is needed makes the information a potential target for advanced adversaries searching for high-value assets to compromise through unauthorized disclosure, unauthorized modification, or exfiltration. For system-related information, unnecessary retention provides advanced adversaries with information that can assist in their reconnaissance and lateral movement through the system.
    • Related controls and activities: None.
  • (03) Non-persistence: Non-persistent connectivity
    • Establish connections to the system on demand and terminate connections after [Selection (1): completion of a request; a period of non-use].
    • Discussion: Persistent connections to systems can provide advanced adversaries with paths to move laterally through systems and potentially position themselves closer to high-value assets. Limiting the availability of such connections impedes the adversary’s ability to move freely through organizational systems.
    • Related controls and activities: SC-10.

References

None.

 

SI-15 Information output filtering

Control

Validate information output from the following software programs and/or applications to ensure that the information is consistent with the expected content: [Assignment: organization-defined software programs and/or applications].

Discussion

Certain types of attacks, including Structured Query Language (SQL) injections, produce output results that are unexpected or inconsistent with the output results that would be expected from software programs or applications. Information output filtering focuses on detecting extraneous content, preventing such extraneous content from being displayed, and then alerting monitoring tools that anomalous behaviour has been discovered.
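A minimal sketch of the concept, assuming a hypothetical application whose only expected output is a numeric total line; the pattern and the alerting hook are illustrative placeholders for organization-defined values:

```python
import re

# Assumed: the monitored application is expected to emit only "total: <number>".
EXPECTED = re.compile(r"^total: \d+(\.\d{2})?$")

def alert_monitoring(anomaly: str) -> None:
    # Placeholder for an organization-defined alerting mechanism.
    print(f"ALERT: unexpected output detected ({len(anomaly)} bytes)")

def filter_output(raw: str) -> str:
    """Pass through expected output; suppress and flag extraneous content."""
    if EXPECTED.fullmatch(raw):
        return raw
    # Output inconsistent with the expected content: do not display it,
    # and signal monitoring tools that anomalous behaviour was discovered.
    alert_monitoring(raw)
    return "output suppressed"

print(filter_output("total: 42.00"))                    # displayed unchanged
print(filter_output("total: 42.00' UNION SELECT ..."))  # suppressed, alert raised
```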

Related controls and activities

SI-03, SI-04, SI-11.

Enhancements

None.

References

None.

 

SI-16 Memory protection

Control

Implement the following controls to protect the system memory from unauthorized code execution: [Assignment: organization-defined controls].

Discussion

Some adversaries launch attacks with the intent of executing code in non-executable regions of memory or in memory locations that are prohibited. Controls employed to protect memory include data execution prevention and address space layout randomization. Data execution prevention controls can either be hardware-enforced or software-enforced, with hardware enforcement providing the greater strength of mechanism.

Related controls and activities

AC-25, SC-03, SI-07.

Enhancements

None.

References

None.

 

SI-17 Fail-safe procedures

Control

Implement the indicated fail-safe procedures when the indicated failures occur: [Assignment: organization-defined list of failure conditions and associated fail-safe procedures].

Discussion

Failure conditions include the loss of communications among critical system components or between system components and operational facilities. Fail-safe procedures include alerting operator personnel and providing specific instructions on subsequent steps to take. Subsequent steps may include doing nothing, re-establishing system settings, shutting down processes, restarting the system, or contacting designated organizational personnel.

Related controls and activities

CP-12, CP-13, SC-24, SI-13.

Enhancements

None.

References

None.

 

SI-18 Personal information quality operations

Control

  1. Ensure the accuracy, relevance, timeliness, and completeness of personal information used for an administrative purpose by the organization across the information lifecycle [Assignment: organization-defined frequency]
  2. Correct or delete inaccurate or outdated personal information

Discussion

Personal information quality operations include the steps that organizations take to confirm the accuracy and relevance of personal information throughout the information lifecycle. The information lifecycle includes the creation, collection, use, handling, storage, maintenance, dissemination, disclosure, and disposal of personal information. Personal information quality operations include editing and validating addresses as they are collected or entered into systems using automated address verification look-up APIs.

Checking personal information quality includes the tracking of updates or changes to data over time, which enables organizations to know how and what personal information was changed should erroneous information be identified. The measures taken to protect personal information quality are based on the nature and context of the personal information, how it is to be used, how it was obtained, and the potential de-identification methods employed.

GC discussion

If the personal information is used or intended to be used as part of an administrative purpose, organizations need to ensure its accuracy. The measures taken to validate the accuracy of personal information used to make determinations about the rights, benefits, or privileges of individuals covered under federal programs may be more comprehensive than the measures used to validate personal information used for less sensitive purposes.

Individuals have the right to request that incorrect information be corrected in government records where they believe there is an error or omission. Organizations must notify the individuals of the correction or notation. Organizations use discretion when determining if personal information is to be corrected or deleted based on the scope of requests, the changes sought, the impact of the changes, and laws, regulations, and policies. Organizational personnel consult with the appropriate privacy senior official or executive and legal counsel regarding appropriate instances of correction or deletion.

Related controls and activities

PM-22, PM-24, PT-02, SI-04.

Enhancements

  • (01) Personal information quality operations: Automation support
    • Correct or delete personal information that is inaccurate or outdated, incorrectly determined regarding impact, or incorrectly de-identified using [Assignment: organization-defined automated mechanisms].
    • Discussion: The use of automated mechanisms to improve data quality may inadvertently create privacy risks. Automated tools may connect to external or otherwise unrelated systems, and the matching of records between these systems may create linkages with unintended consequences. Organizations assess and document these risks in their PIAs and make determinations that align with their privacy program plans.
      As data is obtained and used across the information lifecycle, it is important to confirm the accuracy and relevance of personal information. Automated mechanisms can augment existing data quality processes and procedures and enable an organization to better identify and manage personal information in large-scale systems.
      For example, automated tools can greatly improve efforts to consistently normalize data or identify malformed data. Automated tools can also be used to improve the auditing of data and to detect errors that may incorrectly alter personal information or incorrectly associate such information with the wrong individual. Automated capabilities backstop processes and procedures at scale and enable more fine-grained detection and correction of data quality errors.
    • Related controls and activities: PM-18, RA-08.
  • (02) Personal information quality operations: Data tags
    • Employ data tags to automate the correction or deletion of personal information across the information lifecycle within organizational systems.
    • Discussion: Data tagging personal information includes tags that note handling permissions, authority to collect, de-identification, impact level, information lifecycle stage, and retention or last updated dates. Employing data tags for personal information can support the use of automation tools to correct or delete personal information.
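An illustrative sketch of tag-driven deletion; the tag names, sample values, and retention rule below are assumptions rather than a prescribed tagging scheme:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical tagged record: each field after `value` is a data tag.
@dataclass
class TaggedRecord:
    value: str
    authority_to_collect: str   # tag: collection authority
    impact_level: str           # tag: assessed impact level
    retain_until: date          # tag: retention (last permitted) date

def purge_expired(records: list, today: date) -> list:
    """Automated deletion driven by the retention-date tag."""
    return [r for r in records if r.retain_until >= today]

records = [
    TaggedRecord("555-0100", "Program X", "medium", date(2024, 1, 1)),
    TaggedRecord("555-0199", "Program X", "medium", date(2030, 1, 1)),
]
kept = purge_expired(records, today=date(2025, 6, 1))
print(len(kept))  # 1 -- the record past its retention date was deleted
```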
    • Related controls and activities: AC-03, AC-16, SC-16.
  • (03) Personal information quality operations: Collection
    • Collect personal information directly from the individual, wherever possible.
    • Discussion: Individuals or their designated representatives can be sources of correct personal information. Organizations consider contextual factors that may result in individuals providing incorrect personal information. Additional steps may be necessary to validate personal information, depending on how it is to be used, and how it was obtained. The measures taken to validate the accuracy of personal information used to make determinations about the rights, benefits, or privileges of individuals under federal programs may be more comprehensive than the measures taken to validate less sensitive personal information.
    • GC discussion: Organizations shall, wherever possible, collect personal information that is intended to be used for an administrative purpose directly from the individual to whom it relates, except where the individual authorizes otherwise or where the personal information may be disclosed to the institution from another credible source. Direct collection is also not required where it would result in inaccurate information, or would defeat the purpose or prejudice the use for which the information is collected.
    • Related controls and activities: None.
  • (04) Personal information quality operations: Individual requests
    • Correct or delete personal information upon request by individuals or their designated representatives.
    • Discussion: Inaccurate personal information maintained by organizations may cause problems for individuals, especially in those business functions where inaccurate information may result in inappropriate decisions or the denial of benefits and services to individuals. Even correct information, in certain circumstances, can cause problems for individuals that outweigh the benefits of an organization maintaining the information. Organizations use discretion when determining if personal information is to be corrected or deleted based on the scope of requests, the changes sought, the impact of the changes, and laws, regulations, and policies. Organizational personnel consult with the appropriate privacy senior official or executive and legal counsel regarding appropriate instances of correction or deletion.
    • Related controls and activities: None.
  • (05) Personal information quality operations: Notice of correction or deletion
    • Notify [Assignment: organization-defined recipients of personal information] and individuals that the personal information has been corrected or deleted.
    • Discussion: When personal information is corrected or deleted, organizations take steps to ensure that all authorized recipients of such information, and the individual with whom the information is associated or their designated representatives, are informed of the corrected or deleted information.
    • Related controls and activities: None.

References

 

SI-19 De-identification

Control

  1. Remove the following elements of personal information from datasets: [Assignment: organization-defined elements of personal information]
  2. Evaluate [Assignment: organization-defined frequency] for effectiveness of de-identification
  3. Consider the privacy injury if information that may be publicly available enables re-identification of individuals

Discussion

At times, programs may have use for personal information but can achieve their objective without the use of personal identifiers. Where the objective can be achieved without direct identifiers, the information should be de-identified. De-identification is the general term for a process that removes the association between a set of identifying data and the data subject. Its objective is to prevent someone’s personal information from being evident in uses or disclosures.

Many datasets contain personal information that can be used to distinguish or trace an individual’s identity, such as name, social insurance number, date and place of birth, mother’s maiden name, or biometric records. Datasets may also contain other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

Disposition (deletion) of personal information is one method of de-identification. It should be considered when such information is not (or is no longer) necessary to satisfy the requirements envisioned for the data, and the information should be removed from datasets by trained individuals. For example, if the dataset is only used to produce aggregate statistics, the identifiers that are not needed for producing those statistics are removed. Removing identifiers improves privacy protection since information that is removed cannot be inadvertently disclosed or improperly used.

Consideration should be given to the potential privacy injury if some information can enable re-identification of the individual, even without direct identifiers. Organizations may be subject to specific de-identification definitions or methods under applicable laws, regulations, or policies.

Re-identification is a residual risk with de-identified data. Re-identification attacks can evolve over time, for example through the combination of new datasets or other improvements in data analytics. Maintaining awareness of potential attacks and evaluating the effectiveness of the de-identification over time support the management of this residual risk. It is recommended to plan for regular, ongoing, and periodic assessment of re-identification risk by revisiting dissemination strategies and by amending the terms of information sharing agreements, where applicable, as circumstances change.

Related controls and activities

MP-06, PM-22, PM-23, PM-24, RA-02, SI-12.

Enhancements

  • (01) De-identification: Collection
    • De-identify the dataset upon collection by not collecting personal information.
    • Discussion: If a dataset is being sourced from another program or entity, limit the information sought to only the personal information that is required. For example, if an organization does not intend to use the social insurance number of an applicant, then the social insurance number should not be collected. Organizations should document the collection, use or disclosure of datasets in an information sharing arrangement or agreement.
    • Related controls and activities: None.
  • (02) De-identification: Archiving
    • Prohibit archiving of personal information elements if those elements in a dataset will not be needed after the dataset is archived.
    • Discussion: Datasets can be archived for many reasons. Unless there is a requirement to retain personal information, datasets should be de-identified prior to archiving. If deletion of the personal information is considered, ensure there is a records disposition standard in place and that deletion of the personal information meets the organization’s retention standards.
    • Related controls and activities: None.
  • (03) De-identification: Release
    • Remove personal information elements from a dataset prior to its release if those elements in the dataset do not need to be part of the data release.
    • Discussion: Prior to releasing a dataset, a data custodian considers the intended uses of the dataset and determines if it is necessary to release personal information. If the personal information is not necessary, the information can be removed using de-identification techniques.
    • GC discussion: Prior to releasing a dataset, a data custodian considers the intended uses of the dataset, determines the originating program’s authority to release and the recipient’s authority to collect, and documents the use of each personal information data element against the express purpose of the recipient program. If the personal information is not necessary, the information should not be disclosed.
    • Related controls and activities: None.
  • (04) De-identification: Removal, masking, encryption, hashing, or replacement of direct identifiers
    • Remove, mask, encrypt, hash, or replace direct identifiers in a dataset.
    • Discussion: There are many possible processes for removing direct identifiers from a dataset. Columns in a dataset that contain a direct identifier can be removed. In masking, the direct identifier is transformed into a repeating character, such as XXXXXX or 999999. Identifiers can be encrypted or hashed so that the linked records remain linked.
      In the case of encryption or hashing, algorithms are employed that require the use of a key, including the Advanced Encryption Standard or a Hash-based Message Authentication Code. Implementations may use the same key for all identifiers or use a different key for each identifier. Using a different key for each identifier provides a higher degree of security and privacy. Identifiers can alternatively be replaced with a keyword, including transforming “William Stephenson” to “patient” or replacing it with a surrogate value, such as transforming “William Stephenson” to “Laura Secord.”
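A minimal sketch of the keyed-hash replacement described above, using HMAC-SHA-256 from the Python standard library (the key literal is for illustration only; real keys come from an approved key-management process, per SC-12):

```python
import hmac
import hashlib

# Example-only key: never embed real keys in source code.
KEY = b"example-only-secret-key"

def pseudonymize(direct_identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256)."""
    return hmac.new(KEY, direct_identifier.encode(), hashlib.sha256).hexdigest()

# The same identifier always maps to the same token, so records that were
# linked by the identifier remain linked after de-identification, while
# the identifier itself is not recoverable without the key.
a = pseudonymize("William Stephenson")
b = pseudonymize("William Stephenson")
c = pseudonymize("Laura Secord")
print(a == b)  # True  -- linkage preserved
print(a == c)  # False -- distinct individuals stay distinct
```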
    • Related controls and activities: SC-12, SC-13.
  • (05) De-identification: Statistical disclosure control
    • Manipulate numerical data, contingency tables, and statistical findings so that no individual or organization is identifiable in the results of the analysis.
    • Discussion: Many types of statistical analyses can result in the disclosure of information about individuals even if only summary information is provided. For example, if a school publishes a monthly table with the number of minority students enrolled, reports that it has 10 to 19 such students in January, and subsequently reports that it has 20 to 29 such students in March, then it can be inferred that the student who enrolled in February was a minority. Care should be taken to review all statistical information that is disclosed to ensure the protection of individuals’ personal information.
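One common statistical disclosure control, primary cell suppression, can be sketched as follows; the threshold of 5 and the table values are illustrative assumptions:

```python
# Counts below the threshold are withheld so that small groups
# cannot be singled out or traced across successive releases.
THRESHOLD = 5

def suppress(table: dict) -> dict:
    """Replace counts below the threshold with a suppression marker."""
    return {k: (v if v >= THRESHOLD else "<5") for k, v in table.items()}

enrolment = {"Program A": 142, "Program B": 3, "Program C": 57}
print(suppress(enrolment))
# {'Program A': 142, 'Program B': '<5', 'Program C': 57}
```

In practice, secondary (complementary) suppression of additional cells is often needed as well, so that suppressed values cannot be reconstructed from row or column totals.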
    • Related controls and activities: None.
  • (06) De-identification: Differential privacy
    • Prevent disclosure of personal information by adding non-deterministic noise to the results of mathematical operations before the results are reported.
    • Discussion: The mathematical definition for differential privacy holds that the result of a dataset analysis should be approximately the same before and after the addition or removal of a single data record (which is assumed to be the data from a single individual).
      In its most basic form, differential privacy applies only to online query systems. However, it can also be used to produce machine-learning statistical classifiers and synthetic data. Differential privacy comes at the cost of decreased accuracy of results, forcing organizations to quantify the trade-off between privacy protection and the overall accuracy, usefulness, and utility of the de-identified dataset. Non-deterministic noise can include adding small, random values to the results of mathematical operations in dataset analysis.
    • Related controls and activities: SC-12, SC-13.
  • (07) De-identification: Validated algorithms and software
    • Perform de-identification using validated algorithms and software that is validated to implement the algorithms.
    • Discussion: Algorithms that appear to remove personal information from a dataset may in fact leave information that is personally identifiable or data that is re-identifiable. Software that is claimed to implement a validated algorithm may contain bugs or implement a different algorithm. Software may de-identify one type of data, such as integers, but not de-identify another type of data, such as floating-point numbers. For these reasons, de-identification is performed using algorithms and software that are validated.
    • Related controls and activities: None.
  • (08) De-identification: Motivated intruder
    • Perform a motivated intruder test on the de-identified dataset to determine if the identified data remains or if the de-identified data can be re-identified.
    • Discussion: A motivated intruder test is a test in which an individual or group takes a data release and specified resources and attempts to re-identify one or more individuals in the de-identified dataset. Such tests specify the amount of inside knowledge, computational resources, financial resources, data, and skills that intruders possess to conduct the tests. A motivated intruder test can determine if the de-identification is insufficient. It can also be a useful diagnostic tool to assess if de-identification is likely to be sufficient. However, the test alone cannot prove that de-identification is sufficient.
    • Related controls and activities: None.
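The keyed transformation described in enhancement (04) can be sketched in a few lines using Python's standard library. This is a minimal illustration only: the column names and keys are hypothetical, and a real deployment would generate and protect keys through an approved key-management process (see SC-12 and SC-13).

```python
import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA-256 digest.

    The same (value, key) pair always yields the same token, so linked
    records remain linked; without the key, the token is not reversible.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical per-identifier keys: a separate key for each column, as
# the enhancement recommends for a higher degree of security and privacy.
keys = {"name": b"name-key-0001", "health_no": b"health-key-0001"}

record = {"name": "William Stephenson", "health_no": "1234-567-890"}
masked = {field: pseudonymize(value, keys[field])
          for field, value in record.items()}
```

Because the digest is keyed per column, the same value appearing in two different columns produces two unrelated tokens.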
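The noise addition described in enhancement (06) can be illustrated with a noisy count query. This is a sketch under simplifying assumptions: the dataset and the epsilon value are illustrative, and production systems should rely on a vetted differential privacy library rather than hand-rolled sampling.

```python
import math
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Count matching records, with Laplace noise calibrated to sensitivity 1.

    Adding or removing one individual's record changes a count by at most 1,
    so noise drawn from Laplace(0, 1/epsilon) makes the reported result
    approximately the same either way (epsilon-differential privacy).
    """
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-transform sample from the Laplace distribution.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon values add more noise (stronger privacy, lower accuracy), which is the trade-off the discussion above describes.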

References

 

SI-20 Tainting

Control

Embed data or capabilities in the following systems or system components to determine if organizational data has been exfiltrated or improperly removed from the organization: [Assignment: organization-defined systems or system components].

Discussion

Many cyber-attacks target organizational information, or information that the organization holds on behalf of other entities (e.g., personal information), and exfiltrate that data. In addition, insider attacks and erroneous user procedures can remove information from the system in violation of organizational policies. Tainting approaches can range from passive to active. A passive tainting approach can be as simple as adding false email names and addresses to an internal database. If the organization receives an email at one of the false email addresses, it knows that the database has been compromised. Moreover, the organization knows that the email was sent by an unauthorized entity, so any packets it includes potentially contain malicious code, and that the unauthorized entity may have obtained a copy of the database.

Another tainting approach can include embedding false data or steganographic data in files to enable the data to be found via open-source analysis. Finally, an active tainting approach can include embedding software in the data that is able to “call home,” thereby alerting the organization to its “capture,” and possibly its location, and the path by which it was exfiltrated or removed.
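The passive false-address approach can be sketched as a simple honeytoken check. The addresses below are illustrative placeholders, not real accounts, and a real deployment would hook such a check into the mail gateway's recipient processing and raise an alert on a match.

```python
# Honeytoken addresses seeded into the internal database; mail sent to
# any of them indicates the database has been compromised.
# (Illustrative addresses only.)
HONEYTOKENS = {
    "r.delacroix@example.org",
    "m.vanburen@example.org",
}

def leaked_tokens(recipients, honeytokens=HONEYTOKENS):
    """Return the honeytoken addresses among a message's recipients:
    evidence that the database containing them was exfiltrated."""
    return sorted({r.lower() for r in recipients} & honeytokens)
```

Any non-empty result both confirms the compromise and marks the inbound message itself as untrusted.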

Related controls and activities

AU-13.

Enhancements

None.

References

 

SI-21 Information refresh

Control

Refresh [Assignment: organization-defined information] at [Assignment: organization-defined frequencies] or generate the information on demand and delete the information when no longer needed.

Discussion

Retaining information for longer than it is needed makes the information an increasingly valuable and enticing target for adversaries. Keeping information available for the minimum period of time needed to support organizational missions or business functions reduces the opportunity for adversaries to compromise, capture, and exfiltrate that information.
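The refresh-and-delete cycle can be sketched as a retention sweep over timestamped records. This is a minimal illustration: the retention window stands in for the organization-defined frequency, and a real system would schedule the sweep and log the deletions.

```python
import time

# Illustrative stand-in for the organization-defined frequency.
RETENTION_SECONDS = 24 * 60 * 60

def refresh(store, now=None):
    """Delete every record older than the retention window.

    Each value in `store` is a (created_at, payload) pair; anything past
    its window is removed rather than left as an enticing target.
    """
    now = time.time() if now is None else now
    return {key: (created, payload)
            for key, (created, payload) in store.items()
            if now - created < RETENTION_SECONDS}
```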

Related controls and activities

SI-14.

Enhancements

None.

References

 

SI-22 Information diversity

Control

  1. Identify the following alternative sources of information for [Assignment: organization-defined essential functions and services]: [Assignment: organization-defined alternative information sources]
  2. Use an alternative information source for the execution of essential functions or services on [Assignment: organization-defined systems or system components] when the primary source of information is corrupted or unavailable

Discussion

Actions taken by a system service or function are often driven by the information it receives. Corruption, fabrication, modification, or deletion of that information could impact the ability of the service or function to properly carry out its intended actions. By having multiple sources of input, the service or function can continue operation if one source is corrupted or no longer available. The alternative sources of information may be less precise or less accurate than the primary source, but such sub-optimal sources may still provide a sufficient level of quality that the essential service or function can be carried out, even in a degraded or debilitated manner.
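The fallback behaviour described above can be sketched as consulting information sources in order of preference. The feed functions below are hypothetical placeholders for organization-defined information sources.

```python
def read_with_diversity(sources):
    """Return data from the first source that responds and passes a basic
    check, falling back to alternatives when the primary fails.

    Each source is a zero-argument callable; an alternative may be less
    precise than the primary but still good enough to keep an essential
    function running, possibly in a degraded mode.
    """
    errors = []
    for source in sources:
        try:
            data = source()
            if data is not None:       # minimal corruption check
                return data
        except Exception as exc:
            errors.append(exc)
    raise RuntimeError(f"all information sources failed: {errors}")

# Hypothetical usage: a primary feed that is down, then a coarser backup.
def primary_feed():
    raise ConnectionError("primary source unavailable")

def backup_feed():
    return {"temperature_c": 21}       # less precise, still usable

reading = read_with_diversity([primary_feed, backup_feed])
```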

Related controls and activities

None.

Enhancements

None.

References

None.

 

SI-23 Information fragmentation

Control

Based on [Assignment: organization-defined circumstances]:

  1. fragment the following information: [Assignment: organization-defined information]
  2. distribute the fragmented information across the following systems or system components: [Assignment: organization-defined systems or system components]

Discussion

One objective of the advanced persistent threat is to exfiltrate valuable information. Once exfiltrated, there is generally no way for the organization to recover the lost information. Therefore, organizations may consider dividing the information into disparate elements and distributing those elements across multiple systems or system components and locations. Such actions will increase the adversary’s work factor to capture and exfiltrate the desired information and, in so doing, increase the probability of detection. The fragmentation of information impacts the organization’s ability to access the information in a timely manner. The extent of the fragmentation is dictated by the impact or classification level (and value) of the information, threat intelligence information received, and whether data tainting is used (i.e., data tainting-derived information about the exfiltration of some information could result in the fragmentation of the remaining information).
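The mechanics of dividing information into disparate elements and putting it back together can be sketched with simple byte striping. This illustrates distribution only: striping by itself is not a cryptographic protection, and a real deployment would combine fragmentation with encryption or secret sharing and place each fragment on a different system component.

```python
def fragment(data: bytes, n: int) -> list:
    """Stripe data byte-by-byte across n fragments, so that no single
    system component holds a contiguous copy of the information."""
    return [data[i::n] for i in range(n)]

def reassemble(fragments) -> bytes:
    """Interleave the fragments back into the original data."""
    n = len(fragments)
    out = bytearray(sum(len(f) for f in fragments))
    for i, frag in enumerate(fragments):
        out[i::n] = frag
    return bytes(out)
```

An adversary must now capture every fragment, from every location, to recover the information, which increases the work factor and the probability of detection.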

Related controls and activities

None.

Enhancements

None.

References

None.

 

SI-400 Dedicated administration workstation

Control

Require any administrative or superuser actions to be performed from a physical workstation that is dedicated to those specific tasks and isolated from all other functions and networks, and especially from any form of Internet access.

Discussion

A dedicated administration workstation (DAW) typically comprises a user terminal with a very small selection of software designed for interfacing with the target system. For the purpose of this control, “workstation” means the system from which the administration is performed, as opposed to the target system being administered.

The DAW must be hardened for the role, to minimize the likelihood that a superuser’s or administrator’s endpoint may be compromised by any threat actor (which would logically lead to the compromise of the target system). Typical office productivity tools are not required on the DAW. All non-essential applications and services are removed. DAWs are not domain-joined, cannot download patches from the Internet, and cannot update documentation in networked applications.

Jump servers do not replace the requirement for DAWs. Connections through a jump server must be done from a DAW. Jump servers are used to centralize the logging of administrative activities, simplify the network isolation of critical servers, and provide a common handover point for administrators with separation of duties (e.g., where software patches are staged by one administrator for installation by another administrator).

If a VPN is used to provide network connectivity, the VPN must automatically block all other network access except the VPN tunnel. The DAW must not be able to access networks other than the network it is administering. The management interface of the target system must be restricted to a management network and must not be accessible from wider networks, such as corporate workstation LANs or the Internet. If multiple target networks are administered from the same physical DAW, all networks must have the same security profile (e.g., a single DAW cannot administer both a network for Secret and a network for Protected B).

Virtual terminal software that emulates a simple terminal should also reside on a DAW.

Related controls and activities

AC-04(22), AC-17(400), CM-07, MA-04, SA-08, SC-02, SC-07, SC-08, SC-32.

Enhancements

  • (01) Dedicated administration workstation: Thin client dedicated administration workstation
    • Implement virtualized DAW inside network-isolated physical thin client DAW.
    • Discussion: Administrators who have numerous infrastructures to manage may choose to have a physical thin client DAW which can run multiple virtualized DAWs for different networks.
      The thin client may be employed in a virtual desktop infrastructure (VDI) fashion, where the physical DAW is the hypervisor host for a virtual DAW, or in a network booted thin client fashion, where the operating system of the thin client is loaded from the network to be administered.
      The physical thin client must not have independent network access.
    • Related controls and activities: SA-08, SC-25.
  • (02) Dedicated administration workstation: VPN on carrier private network
    • Connect a DAW to a target network using carrier private networks (e.g., virtual private LAN service (VPLS) or multiprotocol label switching (MPLS)) with VPN encryption.
    • Discussion: When administering a network from a remote location, Internet-addressable VPN gateways and VPN endpoints are vulnerable to attacks by public network-based threat actors. Moving to a carrier private network eliminates public Internet vulnerabilities and reduces the number of threat actors of concern. However, the network may still be vulnerable to confidentiality or integrity attacks from inside the carrier network. Hence, a VPN is still required inside the carrier network.
    • Related controls and activities: SC-08.
  • (03) Dedicated administration workstation: Local area network
    • Connect a DAW to a target network using only LAN.
    • Discussion: To further decrease the likelihood of a network attack, attach a DAW to the target system using only LAN. The target system can then deny administrative access except on the local segment.
    • Related controls and activities: SA-08.
  • (04) Dedicated administration workstation: Console access only
    • Connect a DAW to the target system using only direct console ports.
    • Discussion: To eliminate the likelihood of a network attack, connect a DAW to the target system directly through local serial ports. The system can then deny administrative access to all networks.
    • Related controls and activities: SA-08.
  • (05) Dedicated administration workstation: Dedicated physical workstation
    • Use a single-purpose physical workstation as the DAW.
    • Discussion: To reduce the possibility of attacks moving laterally between networks being administered, use a physical workstation for each target network, without sharing DAW resources.
    • Related controls and activities: SA-08.
  • (06) Dedicated administration workstation: Heterogeneous administrative access
    • Use a different operating system for the DAW relative to the target system.
    • Discussion: A vulnerability in either a DAW or the target system might be shared between the 2, providing a single attack route for both. If the DAW and target systems have different operating systems, it will require additional effort to compromise both.
    • Related controls and activities: SC-29.

References

None.

 