ARTICLE
28 October 2024

NYDFS Speaks Out On AI And Its Cybersecurity Risks

Sheppard Mullin Richter & Hampton


The New York Department of Financial Services ("NYDFS") recently published guidance on managing AI-related cybersecurity risks for the financial services and insurance industries. Though the circular letter does not introduce any per se "new" obligations, it explains the agency's expectations for addressing AI risks under its existing cybersecurity regulation (23 NYCRR Part 500).

The letter identifies specific AI-related cybersecurity threats, such as AI-enabled social engineering. AI can also amplify the potency, scale, and speed of conventional cyberattacks. The letter further notes that AI models may ingest large volumes of nonpublic information, making them attractive targets for attackers. Additionally, reliance on third-party providers and vendors for AI tools introduces supply chain vulnerabilities.

To mitigate these risks, NYDFS advises regulated companies to account for AI-specific risks when conducting comprehensive risk assessments. These assessments should consider not only the organization's own use of AI, but also any AI technologies used by third-party service providers. Based on the findings of these assessments, policies, procedures, and incident response plans may need to be updated to sufficiently address AI-related risks. NYDFS also highlights the need for cybersecurity training for all personnel, including senior executives, that covers AI-related threats and response strategies.

Putting it into practice: This latest thinking from NYDFS adds to the growing patchwork of regulatory guidance on AI-specific considerations (here, cybersecurity risks). Prior guidance has largely focused on other types of AI-related harm, such as bias and discrimination. The letter also serves as a reminder that even companies that do not use AI themselves should be aware of the potential risks of engaging third parties that do, and should implement appropriate mitigating measures.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
