17 March 2026

NIST Issues Draft Of Cybersecurity Framework Profile For Artificial Intelligence

Borden Ladner Gervais LLP


BLG is a leading, national, full-service Canadian law firm focusing on business law, commercial litigation, and intellectual property solutions for our clients. BLG is one of the country’s largest law firms with more than 750 lawyers, intellectual property agents and other professionals in five cities across Canada.
In December 2025, the National Institute of Standards and Technology (NIST) released a draft of the Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile), a new extension of the NIST Cybersecurity Framework (CSF) 2.0.


  • While voluntary, NIST standards often serve as influential benchmarks of what regulators and industry regard as best practices.
  • Rather than treating AI as just a tool, this new framework positions AI systems as a distinct and evolving cyber risk category, requiring tailored governance, supply-chain scrutiny, operational safeguards, and continuous monitoring.
  • This new Cyber AI Profile focuses on three priorities:
    • Secure: AI systems create new attack surfaces, from the models themselves to their supply chains and supporting infrastructure. To manage these risks effectively, organizations must implement AI-specific safeguards such as data provenance checks, adversarial-input protections, and model configuration controls.
    • Defend: AI can significantly enhance defensive operations by improving detection, triage, and threat intelligence correlation. However, organizations must implement strong validation and human oversight to prevent errors, drift, and over-automation risks.
    • Thwart: Organizations must build resilience and robustness into their defence systems to counter new cyber threats enabled by AI.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
