Assessing the Impact of NIST's New Cybersecurity, Privacy, and Artificial Intelligence Recommendations
The United States National Institute of Standards and Technology (NIST) has launched a comprehensive Cybersecurity, Privacy, and AI program that aims to give organizations unified, integrated guidance on managing the risks associated with artificial intelligence (AI).
Addressing AI-Related Risks
The program focuses on three fundamental areas of AI-related risk: securing AI systems and machine learning infrastructure to minimize data leakage, defending against AI-enabled cyberattacks, and leveraging AI to improve cyber defense and privacy protection capabilities.
Data Handling and Model Governance
The program emphasizes the importance of reassessing data management practices so that security and privacy are maintained as AI adoption broadens. On model governance, it encourages organizations to implement controls and oversight mechanisms tailored to risks across the AI lifecycle, including bias, robustness, explainability, and accountability.
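For illustration, lifecycle controls of this kind are often encoded as a structured governance record that gates each model release. The Python sketch below is a minimal, hypothetical example; its field names and sign-off rule are assumptions chosen for the illustration, not terminology from NIST's guidance.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ModelGovernanceRecord:
    """Hypothetical per-release record for AI lifecycle risk checks."""
    model_name: str
    version: str
    accountable_owner: str                     # a named, accountable individual
    bias_evaluation: str = ""                  # link to fairness test results
    robustness_tests: list = field(default_factory=list)
    explainability_artifacts: list = field(default_factory=list)
    approved: bool = False
    review_date: Optional[date] = None

    def sign_off(self, when: date) -> None:
        """Approve the release only when every lifecycle check has evidence."""
        if not (self.bias_evaluation and self.robustness_tests
                and self.explainability_artifacts):
            raise ValueError("lifecycle checks incomplete; release blocked")
        self.approved, self.review_date = True, when
```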
Cross-Functional Collaboration
To enable effective cross-functional collaboration, NIST relies on community profiles, which foster a shared taxonomy and consensus-based risk management priorities. These profiles align technical, legal, compliance, and risk management stakeholders around common objectives and promote the ongoing interdisciplinary coordination needed to embed accountability and regulatory adherence into AI and cybersecurity practices.
Navigating AI Innovation and Security Imperatives
The program will provide industry-tailored frameworks to help organizations navigate the complex intersection of AI innovation and security imperatives. These frameworks are designed to adapt to evolving AI technologies while maintaining trusted, ethical, and compliant organizational practices.
Data Security Challenges
The rapidly evolving threat landscape for AI systems necessitates continuous risk assessments and adaptive security strategies. The new guidance focuses on three main areas of AI data security: data drift, potentially poisoned data, and risks in the data supply chain.
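To make the drift concern concrete, one common check compares a feature's distribution in live traffic against its training-time baseline. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the significance threshold and the single-feature framing are illustrative assumptions, not requirements from the guidance.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline: np.ndarray, live: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """Flag drift when a live feature's distribution departs from the
    training-time baseline at significance level `alpha`."""
    result = ks_2samp(baseline, live)
    return result.pvalue < alpha

# Example: a mean shift in production data should trip the check.
rng = np.random.default_rng(seed=0)
train_feature = rng.normal(0.0, 1.0, size=5000)
live_feature = rng.normal(0.5, 1.0, size=5000)       # simulated drift
print(feature_drifted(train_feature, live_feature))  # True
```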
Organizations must establish comprehensive systems to track data transformations throughout the data lifecycle, using cryptographically signed records. Controlling privileged access to training data, enforcing least privilege for both human and nonhuman identities, and continuously monitoring for anomalous behavior are equally important steps for securing AI systems.
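As an example of cryptographically signed records, each lineage entry can be signed with an asymmetric key so that downstream consumers can verify both its origin and its integrity. The Python sketch below uses Ed25519 from the third-party `cryptography` package; the record fields are hypothetical.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # in practice, held in a KMS/HSM
verify_key = signing_key.public_key()

# Hypothetical lineage entry describing one transformation step.
record = json.dumps({
    "dataset": "customer-events",
    "version": "2024-06-01",
    "transform": "pii-redaction",
    "input_sha256": "<digest of the source snapshot>",
}, sort_keys=True).encode()

signature = signing_key.sign(record)

# Downstream consumers verify the record before trusting the data it describes.
try:
    verify_key.verify(signature, record)
    print("lineage record verified")
except InvalidSignature:
    print("record altered; do not trust this dataset version")
```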
Securing AI Systems
Organizations should ensure that data used in AI training comes from trusted, reliable sources and use provenance tracking to trace data reliably throughout its lifecycle. Maintaining data integrity during storage and transport requires robust integrity measures such as cryptographic hashes, checksums, and digital signatures.
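As a simple illustration of such integrity checks, the sketch below records a SHA-256 digest when a dataset is published and re-verifies it after transport; the file name and the idea of a provenance manifest are assumptions made for the example.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large training sets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in dataset so the example is self-contained.
Path("train.csv").write_bytes(b"feature,label\n0.1,0\n0.9,1\n")

# Publisher records the digest in a provenance manifest alongside the data.
manifest = {"train.csv": sha256_of(Path("train.csv"))}

# Consumer recomputes the digest after transfer and rejects mismatches.
assert sha256_of(Path("train.csv")) == manifest["train.csv"], "integrity check failed"
```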
AI-Specific Incident Response
AI-specific incident response procedures remain a critical gap in many organizations' security postures, requiring specialized planning tailored to AI system architectures. The National Security Agency's Artificial Intelligence Security Center (AISC) has released a Cybersecurity Information Sheet, "AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems," to aid in this area.
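To make that gap concrete, an AI-specific playbook adds containment steps that generic incident response omits, such as freezing the suspect model version and preserving training artifacts for forensics. The sketch below is a hypothetical outline with stubbed deployment hooks; it is not a procedure taken from the NSA information sheet.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-incident")

# Stubs standing in for real deployment hooks (assumptions for illustration).
def disable_endpoint(model_id: str) -> None:
    log.info("serving disabled for %s", model_id)

def snapshot_artifacts(model_id: str) -> None:
    log.info("weights, data snapshots, and access logs preserved for %s", model_id)

def deploy(version: str) -> None:
    log.info("deployed %s", version)

def contain_model_incident(model_id: str, last_known_good: str) -> None:
    """Hypothetical containment for a suspected data-poisoning event."""
    disable_endpoint(model_id)       # 1. stop serving the suspect version
    snapshot_artifacts(model_id)     # 2. preserve evidence for forensics
    deploy(last_known_good)          # 3. restore the last verified version
    log.info("incident on %s contained at %s",
             model_id, datetime.now(timezone.utc).isoformat())

contain_model_incident("fraud-model-v7", last_known_good="fraud-model-v6")
```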
In sum, NIST's program integrates cybersecurity, privacy, and AI risk management into a unified framework that guides organizations toward secure data handling, robust AI model governance, and effective cross-functional collaboration, helping them manage emerging AI-related risks and opportunities responsibly and transparently.