On December 16, 2025, the National Institute of Standards and Technology (NIST) released its preliminary draft Cyber AI Profile (NIST IR 8596, Cybersecurity Framework Profile for Artificial Intelligence), a framework intended to provide organizations navigating adoption of artificial intelligence (AI) tools with guidance on managing AI-related risks. Aligned with NIST's Cybersecurity Framework (CSF) 2.0, the Cyber AI Profile addresses the new cybersecurity risks and opportunities that AI introduces. This preliminary draft provides for a 45-day comment window (until January 30, 2026), allowing NIST to review stakeholder input before releasing an initial public draft.
AI has moved from experimental pilots to an integral part of daily operations, budgets, and risk management for many U.S. businesses; it is now embedded in products, workflows, and vendor ecosystems. The integration of AI affects legal, technical, procurement, and governance functions, making cross-functional collaboration essential. Third-party tools and services increasingly incorporate AI, necessitating due diligence and alignment of data use, security requirements, and monitoring expectations. Both adversaries and defenders are leveraging AI; attackers use it to scale phishing and create deepfakes, while defenders employ it for threat detection and response. Despite the significant enterprise-wide challenges presented by AI integration, many organizations lack the dedicated resources to fully manage these new risks, prompting NIST to develop this new guidance after engaging with cybersecurity leaders.
The draft Cyber AI Profile builds upon two of NIST's existing foundational frameworks: CSF 2.0 and the AI Risk Management Framework (AI RMF). The Cyber AI Profile synthesizes these frameworks by applying the structure of CSF 2.0 to AI-specific risks, thereby enabling organizations to secure their AI systems, strengthen their cyber defenses with AI, and prepare for AI-enabled threats. While CSF 2.0 defines high-level cybersecurity outcomes and the AI RMF seeks to improve AI trustworthiness and reduce risk, the draft Cyber AI Profile seeks to integrate AI considerations into a CSF-aligned program. This provides business leaders and technology teams with a common language for setting goals and gives practitioners specific reference points for policies, controls, and vendor expectations without replacing existing security frameworks. Notably, the Cyber AI Profile does not define "AI," allowing the term to be interpreted broadly due to the evolving nature of the field. To aid understanding, it provides examples of AI and defines "AI systems" as any systems using AI capabilities, including stand-alone systems, as well as applications, infrastructure, and organizations that incorporate AI.
The preliminary draft explains how to apply the CSF 2.0 outcomes to AI across three practical "focus areas":
- Securing AI System Components (Secure): Managing cybersecurity challenges when integrating AI into organizational systems and infrastructure.
- Conducting AI-Enabled Cyber Defense (Defend): Using AI to improve cybersecurity while managing the challenges posed by AI-supported defensive operations, highlighting the need for human oversight to maintain regulatory and legal compliance.
- Thwarting AI-Enabled Cyber Attacks (Thwart): Building resilience against new cyber threats that rely on AI.
The core of the draft is a set of tables aligned to the six CSF "functions" (Govern, Identify, Protect, Detect, Respond, and Recover). Each table discusses AI-specific considerations for each of the three focus areas and assigns a proposed priority level (1 to 3) to each CSF subcategory to guide planning. Sample opportunities are included for the Defend focus area to suggest how AI can help achieve the desired outcomes in each subcategory. The document also contains informative references to additional resources and uses the phrase "standard cybersecurity practices apply" where no unique AI twist is needed.
The draft highlights the following considerations unique to AI:
- NIST urges organizations to maintain inventories covering models, agents, application programming interfaces (APIs)/keys, datasets/metadata, and embedded AI integrations/permissions, as well as maps of end-to-end AI data flows to support boundary enforcement and anomaly detection.
- Data plays a large role in the operation of AI systems. The provenance and integrity of training and input data should be verified as rigorously as with software and hardware.
- Organizations should extend supply chain risk management to model and data supply chains, require AI-specific terms in contracts, conduct AI-relevant due diligence and continuous monitoring, and include key suppliers in incident planning and response.
- The AI legal, regulatory, and standards landscape is rapidly evolving and will impact the decisions organizations make regarding whether, when, and how to use AI. Organizations need measures in place to maintain awareness of their responsibilities.
- Human accountability and oversight remain essential. Organizations should assign a human owner for AI system actions, define who approves AI-assisted defense actions, and ensure human review of AI-generated risk content before decisions.
- AI-enabled attacks change the speed, scale, and ease with which attacks can occur, requiring higher prioritization of detection, coordination, response timelines, and resilience planning. Organizations may consider AI-assisted penetration testing and red teaming to keep pace with the speed and scale of AI-enabled cyberattacks.
- AI also introduces new classes of vulnerabilities that need to be accounted for when securing AI applications and using AI systems for defensive purposes. Organizations should include AI-specific attacks such as adversarial input, model evasion, data poisoning, and others in risk assessments.
- As AI threats continue to change rapidly, organizations should standardize AI risk calculation/prioritization methods and revisit AI-related policies, risk tolerance, and risk assessments with more frequency to keep pace with shifting AI system capabilities, evolving regulations, and emerging threats.
- The Cyber AI Profile recommends dedicated lines of communication for AI risks across the enterprise and with third parties to accelerate escalation and alignment during incidents.
- NIST highlights the need for additional resources and for training staff on AI system capabilities, limitations, and evolving threats, as well as for integrating these requirements into human resources practices.
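For technical readers, the inventory recommendation above can be made concrete with a small sketch. The record layout, field names, and example assets below are illustrative assumptions for discussion only; the draft Cyber AI Profile does not prescribe any particular schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One entry in a hypothetical AI asset inventory (illustrative schema only)."""
    name: str                  # e.g., a model, agent, or embedded AI integration
    asset_type: str            # "model" | "agent" | "api_key" | "dataset"
    owner: str                 # accountable human owner, per the draft's oversight theme
    data_flows: list = field(default_factory=list)  # upstream/downstream systems this asset touches
    third_party: bool = False  # flag vendor-supplied components for supply chain review

# Hypothetical inventory entries
inventory = [
    AIAssetRecord("support-chatbot", "agent", "security-team",
                  ["crm", "ticketing"], third_party=True),
    AIAssetRecord("fraud-model-v2", "model", "risk-team", ["transactions-db"]),
]

# Simple query: which third-party AI assets need vendor due diligence?
vendor_assets = [a.name for a in inventory if a.third_party]
```

Even a lightweight structure like this lets an organization answer the questions the draft raises, such as which AI assets are vendor-supplied, who owns each one, and what data flows would need to be traced during an incident.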
NIST issued the Cyber AI Profile as part of its broader Cybersecurity, Privacy, and AI program to help the business community adapt current risk management approaches to AI's realities, while emphasizing that many existing cybersecurity practices remain effective. At a strategic level, the Cyber AI Profile underscores clear leadership accountability, cross-functional teamwork among legal, privacy, procurement, and security stakeholders, and swifter policy updates. It further emphasizes human oversight of AI-mediated actions and extends supply chain diligence to include data provenance and integrity. Operationally, these themes translate to immediate practical actions, such as updating asset inventories, reviewing risk assessments with AI-specific threats, setting more frequent review triggers for policies and risk appetite, applying guardrails and human-in-the-loop controls to AI-assisted tools, and tuning security playbooks for potential AI-accelerated attacks.
In parallel with the Cyber AI Profile, NIST is developing SP 800-53 "Control Overlays for Securing AI Systems" (COSAiS) to provide implementation-level guidance that complements the Cyber AI Profile's outcome-oriented approach, so that organizations can prioritize and operationalize AI-related controls in a coordinated way. While the Cyber AI Profile and COSAiS are available for comment and will likely be further revised and expanded in 2026, businesses should adopt a proactive approach to considering and implementing the guidance available at this stage. Steps that organizations should consider implementing today include performing a gap assessment and making targeted updates to their AI policies and incident response plans.
Co-Author: Yusef Abutouq
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.