ARTICLE
23 October 2024

NY DFS Issues Guidance On AI Cyber Risk

Wilson Elser Moskowitz Edelman & Dicker LLP


On October 16, 2024, the New York State Department of Financial Services (NY DFS) issued important guidance on steps that organizations can take to detect and mitigate cybersecurity risk posed by artificial intelligence (AI). While the guidance was addressed to organizations licensed and regulated by NY DFS in the banking, financial services and/or insurance sectors, it provides a useful framework for organizations across all industry sectors that continue to experience heightened risk of cyber threats compounded by advances in and proliferation of evolving AI technology.

AI Cyber Risks

As noted by NY DFS, AI has increased cybersecurity risks for organizations in several ways. First, threat actors are using AI tools to conduct sophisticated and realistic social engineering scams via email (phishing), telephone (vishing) and text message (smishing). Second, cybercriminals can leverage AI tools to accelerate the speed, scale, number and severity of cyber-attacks on organizations' computer networks and information systems. Third, the relative accessibility and ease of use of AI has lowered the barrier to entry, enabling less sophisticated cybercriminals to attack organizations, disrupt their operations and steal sensitive data.

NY DFS also cautions that "[supply] chain vulnerabilities represent another critical area of concern for organizations using AI." AI tools require the collection, sifting and analysis of vast troves of data. As such, any vendors or third-party service providers involved in this process may represent a particular threat that could expose an organization's non-public information.

Mitigating AI-Related Threats

The NY DFS Cybersecurity Regulation at 23 NYCRR Part 500 requires licensed entities to conduct a Cybersecurity Risk Assessment and implement minimum cybersecurity standards designed to mitigate cyber threats – including those posed by AI. The recent NY DFS guidance offers useful tips for all organizations that want to take proactive steps to defend against AI-related cyber threats. Some practical tips and guidance are summarized in the table below.

[Table: Practical tips for mitigating AI-related cyber threats]

Conclusion

The growing adoption of AI tools and applications presents both potential benefits and risks to organizations across all industry sectors. As the law regulating AI technology continues to evolve, organizations are well advised to take stock of their AI-related risks now by adopting and implementing cybersecurity and AI risk assessments, policies and procedures, employee training, and vendor contract provisions that address AI cyber threats and privacy concerns.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
