ARTICLE
25 February 2025

10 Takeaways For Addressing Artificial Intelligence In 10-Ks


(February 21, 2025) - Daniel L. Forman of Lowenstein Sandler LLP discusses key disclosure considerations for securities 10-K filings as companies incorporate AI tools into their business operations.

Over the past year, many companies have begun or continued to incorporate sophisticated artificial intelligence (AI) tools into their business operations and products and services. AI is having a transformative impact across many industries and has the potential to significantly impact a company's financial performance, competitive position, risk exposures, and strategic direction.

In response to this growing importance, the U.S. Securities and Exchange Commission (the SEC) has reminded public companies, through guidance, commentary, and enforcement, that AI-related disclosures must carefully adhere to the fundamental principles of the federal securities laws so that they provide investors with material information and are grounded in reasonable bases.

This article provides 10 key disclosure considerations for companies as they prepare and finalize their Form 10-K filings for the fiscal year ended Dec. 31, 2024.

1. Clearly define AI and describe why it is material

The SEC has requested in comment letters that companies clearly define "artificial intelligence" and related concepts, including "AI" (https://bit.ly/4316w0B), "generative AI" (https://bit.ly/3QodFAz), "deep learning" (https://bit.ly/3EKaqkn), "large language models" (https://bit.ly/3CX9ys8), and "neural networks" (https://bit.ly/4gKdqun).

Any of these and related concepts used in public disclosures should be defined in the context of the specific business and its operations. The SEC may also focus on whether, based on the disclosure in a company's annual report and other public statements in press releases and investor presentations, the company's use of AI is actually material to its business.

2. Impact on business strategy, operations, and prospects

In crafting and reviewing AI-related business disclosures, companies should be mindful (and potentially comforted) that, while there are no AI-specific disclosure requirements, the fundamentals of good securities disclosure apply. Any AI-related disclosures should focus specifically on how a company's current or proposed use of AI will impact its business strategy, operations, and prospects; they should not consist of generic or boilerplate statements.

The SEC has frequently requested that companies provide greater specificity and detail when making AI claims. It has been particularly focused, including through enforcement actions (https://bit.ly/3QnVfjr), on the issue of "AI washing" (https://bit.ly/435cHRh), where a company overstates its AI capabilities, uses, or impacts.

Companies should have a reasonable basis for any claims that are made when discussing AI prospects and consider appropriate corresponding risk factors and forward-looking statement disclosures.

3. Research and development

Companies should disclose if they are conducting material research and development in support of developing AI technologies. Companies should be specific about these research and development activities and whether they are meant to improve business operations or are intended to be incorporated into or used in the company's products and services.

For example, if a life sciences company is expending material resources on building an AI platform for drug discovery, or a tech company is devoting significant resources to purchasing computing services and training proprietary AI systems, it should provide sufficiently detailed disclosures with a reasonable explanation of expected outcomes and risks.

To avoid AI-washing concerns, companies should be mindful not to overstate how much research and development they are doing or the potential results of these activities.

4. Competitive position

In reflecting on their competitive position in an annual report, companies often describe their competitors, the competitive landscape, and how the competition may impact their business.

A company should reflect on how AI may impact its competitive position, either providing it with advantages or putting it at a disadvantage in relation to competitors. In addition, in describing how the company competes in its market, it may be helpful to consider how AI may change both the ways it competes with other businesses and the behaviors of its customers.

5. Regulatory developments

AI has come under regulatory scrutiny, and various jurisdictions have taken different approaches to regulation. Companies should address how emerging AI-specific regulations affect or could affect operations or compliance requirements.

Consider, for example, the European Union's Artificial Intelligence Act (the AI Act), which formally entered into force in August 2024 and recently began taking effect by banning certain applications of AI deemed to pose an unacceptable risk to EU citizens. Additional provisions are scheduled to take effect in 2026, with the AI Act becoming fully applicable by 2027.

In the United States, there is currently a patchwork of state and local regulations, including many that touch on specific areas of law and activities, such as the use of AI in employment decisions and the handling of personal data.

6. Risk factors

If the use of AI is material to a company's business operations, products, or services, it should disclose the associated risks fully and clearly in the risk factors section of its annual report. There are a variety of risk categories that AI use could fall under, and companies should consider the following, among others:

  • Competition risks;
  • Operational and business risks;
  • Intellectual property risks;
  • Cybersecurity and privacy risks;
  • Dependence on third-party AI providers and risks related to their operations;
  • Legal, regulatory, and reputational risks.

Companies should consider not only whether stand-alone AI risk factors are advisable and should be added but also, perhaps more importantly, whether the impact of AI necessitates changes to existing risk factors that touch on the areas listed above.

7. Management's discussion and analysis (MD&A) and financial statements

The SEC (https://bit.ly/4gTyxu9) has consistently focused on MD&A in its reviews of periodic reports. Given this scrutiny, companies should emphasize discussing and analyzing known trends, demands, commitments, events, and uncertainties in their MD&A.

If operational plans include significant capital investments in AI capabilities — whether through in-house development or third-party licensing — companies should evaluate whether these investments represent a "trend" and whether further discussion is needed of material AI investments or usage that has materially impacted, or may in the future materially impact, the company's financial position or results of operations.

Additionally, companies should consider whether there has been a material impact on revenues or income from continuing operations, as well as any events that could significantly alter the relationship between costs and revenues. For instance, if relevant, a company might disclose information about the costs associated with AI research and development, deployment, and maintenance, as well as the revenue generated from AI-driven products or services. If AI usage is altering the nature of expenses or profit margins within the business or its segments, providing this context to investors could be valuable.

8. Cybersecurity disclosures

Companies should consider whether their AI use impacts cybersecurity disclosures that are now required under the SEC's rules and regulations (https://bit.ly/3EKr5nW). For example, if a company's AI systems rely heavily on large datasets that include personal or otherwise sensitive data, it may be important to address how risks particularized to such use are assessed, identified, and managed. A company should also consider the processes it has in place to monitor and manage cybersecurity threats with third parties that provide its AI services.

Additionally, if AI is utilized as a tool to better secure data or protect against cyber threats, it may be helpful to provide that information to investors, including the relevant expertise of officers responsible for overseeing such processes.

9. Corporate governance and human capital

While certain elements may be better saved for the proxy statement, companies should begin considering, as part of their annual reporting process, what AI disclosures are advisable in the areas of corporate governance and human capital. If AI has become a material part of business operations, consideration should be given to whether investors would be well served by disclosure about the company's risk oversight framework for AI and relevant director and management qualifications. Institutional investors have already begun to monitor these areas.

In addition, companies may disclose whether they have adopted ethical guidelines, principles, or policies for AI usage, or have set up governance frameworks for responsible AI deployment (e.g., minimizing bias or ensuring transparency). These disclosures may be appropriately situated in the broader context of the company's ESG and human capital initiatives.

10. Process considerations

To produce accurate disclosures, it's important to have the right disclosure controls and procedures in place. Companies should consider reviewing their existing disclosure controls and procedures to assess whether they appropriately cover new and evolving uses of AI.

For example, there may be new internal functions or senior management teams relating to AI use that should be included in the disclosure process. New personnel may need to be added to disclosure committees. Further, there may be additional steps and/or certification procedures to validate and support the accuracy and completeness of the company's AI-related public disclosures.

Conclusion

By addressing these considerations, companies can be more comfortable that their disclosures comply with SEC requirements and provide investors with a clear understanding of the potential opportunities and risks associated with the use of AI in their business.

Originally published by Thomson Reuters Westlaw Today

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
