ARTICLE
28 October 2025

Integrating Artificial Intelligence Into Business Valuation: Methodologies, Risks, And Standards

Ankura Consulting Group LLC

Contributor

Ankura Consulting Group, LLC is an independent global expert services and advisory firm that delivers end-to-end solutions to help clients at critical inflection points related to conflict, crisis, performance, risk, strategy, and transformation. Ankura consists of more than 1,800 professionals and has served 3,000+ clients across 55 countries. Collaborative lateral thinking, hard-earned experience, and multidisciplinary capabilities drive results, and Ankura is unrivaled in its ability to assist clients to Protect, Create, and Recover Value. For more information, please visit ankura.com.

What Is Artificial Intelligence (AI)

Artificial intelligence is a powerful set of technologies that has evolved from basic automation tasks to more advanced functions and systems, becoming increasingly pervasive in business and finance applications. For business valuators, the introduction of AI technologies into the appraisal toolkit introduces important questions involving not just technical capabilities and efficiency, but also professional and ethical standards.

The Evolving Relationship Between AI and Business Valuation (BV)

The integration of artificial intelligence into business valuation practices marks a significant shift in how valuation professionals determine company worth, analyze markets, and prepare valuation reports. AI systems increasingly use third-party data, presenting valuation professionals with challenging legal, ethical, and practical considerations.

Artificial intelligence is reshaping traditional valuation methods by enabling better data analysis, automating routine tasks, and discovering insights previously unnoticed. AI tools can swiftly process extensive financial data, industry metrics, and market information, improving the accuracy and depth of valuation reports. However, the reliability of AI in valuation depends heavily on data quality and legal clarity regarding its use.

Data validation becomes crucial as inaccurate data inevitably produce flawed results. This is particularly important in business and intellectual property valuations, where inaccurate inputs can have major financial implications. Valuation professionals should always carefully evaluate data to select relevant variables, use suitable algorithms, and calibrate models for optimal outcomes.1
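The validation step described above can be sketched in code. The following is an illustrative sketch only: field names, thresholds, and the structure of the records are invented for the example, not drawn from any specific valuation platform.

```python
# Illustrative sketch: a minimal pre-model validation pass over
# hypothetical financial inputs. Field names and plausibility bounds
# are assumptions chosen for the example.

def validate_inputs(records):
    """Flag records that should not reach a valuation model unreviewed."""
    issues = []
    for i, rec in enumerate(records):
        # Completeness: every expected field must be present and non-null.
        for field in ("revenue", "ebitda", "period"):
            if rec.get(field) is None:
                issues.append((i, f"missing {field}"))
        # Plausibility: margins outside a sane band suggest bad data.
        rev, ebitda = rec.get("revenue"), rec.get("ebitda")
        if rev and ebitda is not None:
            margin = ebitda / rev
            if not -1.0 <= margin <= 1.0:
                issues.append((i, f"implausible EBITDA margin {margin:.2f}"))
        if rev is not None and rev < 0:
            issues.append((i, "negative revenue"))
    return issues

flags = validate_inputs([
    {"revenue": 1_000_000, "ebitda": 150_000, "period": "FY2024"},
    {"revenue": 500_000, "ebitda": 900_000, "period": "FY2024"},  # 180% margin
    {"revenue": None, "ebitda": 50_000, "period": "FY2024"},
])
for idx, msg in flags:
    print(f"record {idx}: {msg}")
```

A real workflow would add many more checks (period consistency, outlier detection against peer data, reconciliation to source documents), but the principle is the same: flawed inputs are caught before they reach the model.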

Ethical Standards / BV Standards / Court, Government, and Agency Directives

While the landscape continues to shift, the core professional and ethical tenets that define the appraisal profession—and uphold trust with relevant stakeholders, including clients, regulators, investors, and other users of appraisal reports—remain unchanged (that is, client confidentiality, transparency, professional judgment, and objectivity). Further, as demonstrated by recent government, regulatory, and judicial actions involving AI use, the application of AI in business valuation engagements requires a balanced approach that integrates technological innovation with established ethical, legal, and regulatory frameworks.

This article explores some of the key considerations that might guide practitioners when thinking about utilizing AI in their business valuation engagements.

Government and Regulatory Approaches

In the United States, the regulatory approach to evolving AI technology thus far reflects a consistent theme: AI must be used responsibly. The government's focus centers around data safeguards, intellectual property protection, privacy, and public equity, with additional emphasis placed on risk mitigation, talent development, and ethical infrastructure.

For example, early executive branch actions, such as Executive Order 13859 (2019), Executive Order 13960 (2020), and Executive Order 14110 (2023) position AI as a determinant of U.S. technological dominance, marking the beginnings of a more coordinated national strategy with respect to both its development and use as a competitive advantage and the application of existing legal and regulatory frameworks designed to safeguard citizens and their rights. In particular, Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," addresses key ethical considerations for the development and use of AI.

In addition, the stance the Securities and Exchange Commission (SEC) took—initially, as an internal compliance effort—demonstrates a consistent commitment to a balanced approach that looks to existing legal and regulatory practices. In 2024, the SEC issued its first compliance plan under OMB Memorandum M-24-10, led by Chief AI Officer David Bottom. This initiative established working groups and reporting structures designed to implement AI use cases within the SEC in a controlled and deliberate manner. Commissioner Mark Uyeda, speaking at an SEC-hosted roundtable on AI, offered insight into the agency's philosophy. He argued against an overly prescriptive approach that could result in duplicative or outdated regulation, stating "various forms of AI have been used in financial products and services for decades" and emphasizing the importance of a "technology-neutral approach" to rulemaking. The commissioner's comments highlight a desire to support innovation without undermining the foundations of existing financial regulation or creating new obstacles to technological progress.

Still, regulatory flexibility does not imply a lack of enforcement. The SEC has already sanctioned firms for misrepresenting their AI capabilities, with penalties issued in March 2024, for example, against two investment advisors who overstated their use of the technology. Notably, the issue in these cases was not AI itself, but the failure to represent its use transparently and accurately. Elsewhere, attorneys have received fines for citing fabricated AI-generated case law, reinforcing the expectation that long-standing professional ethical standards (verifiability, honesty, and diligence) continue to hold.

SEC disclosure requirements, too, are adapting. Regulation S-K requires registrants to describe material assumptions and methodologies in filings such as MD&A sections and 10-K risk discussions. If AI plays a meaningful role in the pricing decisions or valuation techniques of filers, that involvement may need to be appropriately described. The emphasis, as ever, is on understandability and attribution, concepts that were components of regulatory requirements long before AI. Similarly, where AI influences financial modeling or internal decision-making processes affecting financial reporting, those systems may fall under scrutiny related to Sarbanes-Oxley Section 404. Auditors may evaluate the governance and oversight surrounding AI-enabled tools, just as they would for any system that materially affects financial disclosures.

The Federal Trade Commission (FTC) has signaled a similar approach, focusing its enforcement actions on situations involving the alleged use of "AI tools to trick, mislead, or defraud people [which] is illegal." For example, DoNotPay, a company marketing itself as "the world's first robot lawyer," agreed to an FTC consent order (without admitting or denying the allegations) to settle charges that it made deceptive claims about its AI chatbot's ability to provide AI-generated legal services.2

The FTC's concerns extend to financial marketing and consumer tools, where it seeks to ensure that claims of AI application or superiority be substantiated and clearly communicated. In areas such as credit scoring or algorithmic underwriting, the FTC has reminded companies that long-standing consumer protections, like those under the Equal Credit Opportunity Act, apply regardless of whether the technology is conventional or AI-driven.

Beyond the above regulatory actions, judicial bodies (and academic institutions) are also weighing in. Courts are sanctioning attorneys for failing to validate AI-generated legal citations and excluding experts who failed to independently validate AI-driven analyses,3 while universities find themselves navigating policy updates around appropriate student use of AI in academic work. These examples, though outside financial regulation, illustrate a broader consensus: AI does not create new ethical standards; it simply brings new contexts in which the existing ones apply.

Valuation Professional Organization Perspectives

Within the valuation profession itself, similar reasoning guides recent developments. In 2025, the Appraisal Standards Board (ASB) issued a concept paper titled "Generative AI and Appraisal Standards—A Call for Stakeholder Input."4 The paper acknowledges AI's potential influence on appraisal practices but ultimately emphasizes that the use of such tools must align with USPAP's core appraisal requirements. Confidentiality, documentation, and the appraiser's judgment remain central. While AI may assist with analysis or report drafting, it is not a substitute for professional judgment, nor does it exempt practitioners from the responsibility to support and explain their conclusions.

Other valuation professional organizations offer similar perspectives. The National Association of Certified Valuators and Analysts (NACVA) established the Artificial Intelligence and Machine Learning Commission (AIMLC) and launched AI Data University, both designed to help practitioners navigate emerging tools while remaining grounded in existing professional standards. The American Society of Appraisers (ASA) held a session titled "AI Revolution: Why It Matters to Appraisers" and published technical guidance focused on practical implications and emerging expectations. The American Institute of Certified Public Accountants (AICPA), as well as other valuation organizations, has likewise contributed education and commentary designed to support the thoughtful use of AI in valuation and forensic work.

Across these efforts, the approach is consistent: There is no need to rewrite the rules. Instead, established ethical and professional standards can be interpreted with an eye toward the specific characteristics of AI—its speed, its scale, its opacity, and its dependency on training data and algorithms.

Key Ethical Considerations

As mentioned previously, key professional and ethical considerations for business valuators when incorporating AI technologies into their engagements include: client confidentiality, transparency, professional judgment, and independence. The paragraphs to follow address these in turn.

Client confidentiality: Many AI platforms store or use data input by users for model refinement, raising concerns about the handling and safeguarding of sensitive and/or client-specific information—particularly with respect to public AI models. While user policies differ by platform, valuation professionals are increasingly expected to understand and evaluate those policies before entering proprietary data. Some organizations may choose to limit AI use by practitioners to public datasets, while others may craft engagement terms that clarify client expectations regarding restrictions to use. These choices, though varied, reflect an ongoing need to apply familiar standards to unfamiliar tools.

Transparency: As AI enters the appraisal workflow, practitioners should consider how to communicate its role to clients and stakeholders without overstating its influence or diminishing their own professional input and judgment. Transparency regarding the use of AI, and its influence on the analysis and/or reporting, is an important consideration, particularly when AI outputs may be interpretive or automated. At the same time, firms may need to weigh how to address questions of output reliability, hallucination risk,5 and model drift, among other things, especially when using AI tools that generate content or suggest valuation assumptions.

Professional judgment: While AI can enhance decision-making processes, business valuations continue to involve significant professional judgment and experience. Business valuation often requires contextual understanding, industry expertise, and nuanced interpretation of data beyond what AI algorithms alone provide. AI algorithms rely on historical data and historical trends to generate outputs. As the familiar disclaimer cautions, however: "Past performance is not indicative of future results." The need for sound professional judgment remains in the context of complex valuation analysis involving future expectations, risks, and outcomes.

In practice, AI tools may contribute to drafting written content, formatting footnotes, or generating supporting exhibits. Here, too, ethical considerations remain familiar. As with all aspects of valuation reporting, the ultimate responsibility rests with the practitioner. Verifying references, ensuring accuracy, and aligning language with professional norms continues to be part of the process, even if an AI tool assists along the way.

Objectivity: Concerns about data bias and objectivity are also apropos. Like any system trained on historical data, AI models can be susceptible to inherent biases in their algorithmic designs and/or patterns or gaps in their data inputs. If those patterns skew valuation outputs in a particular direction, or omit important factors, they may create unintended consequences. Professionals may implement measures designed to identify, mitigate, and/or disclose such risks, whether through dataset diversification, AI bias audits,6 periodic algorithm refinements, or simply exercising heightened sensitivity when interpreting AI-supported conclusions.
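One of the audit measures mentioned above can be sketched simply: compare a model's average output across data segments and flag large disparities for human review. The segments, predicted multiples, and the 15% tolerance below are all hypothetical, chosen only to illustrate the mechanic.

```python
# Minimal bias-audit sketch: flag segments whose mean model output
# deviates from the overall mean by more than a relative tolerance.
# Segment labels, values, and tolerance are illustrative assumptions.

from statistics import mean

def disparity_report(predictions, segment_of, tolerance=0.15):
    """Group predictions by segment and flag outlier segments."""
    by_segment = {}
    for item_id, value in predictions.items():
        by_segment.setdefault(segment_of[item_id], []).append(value)
    overall = mean(v for vals in by_segment.values() for v in vals)
    flagged = {}
    for seg, vals in by_segment.items():
        rel_gap = (mean(vals) - overall) / overall
        if abs(rel_gap) > tolerance:
            flagged[seg] = round(rel_gap, 3)  # signed relative deviation
    return flagged

# Hypothetical AI-estimated EBITDA multiples, segmented by (say) the
# region of the comparable set used to train the model.
preds = {"a": 6.1, "b": 5.9, "c": 6.0, "d": 8.4, "e": 8.6}
segs  = {"a": "region_1", "b": "region_1", "c": "region_1",
         "d": "region_2", "e": "region_2"}
print(disparity_report(preds, segs))
```

A flagged segment is not proof of bias; it is a prompt for the practitioner to ask whether the gap reflects genuine economics or a skew in the training data.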

Data Tools

Several tools are available for valuation professionals, especially in the industry data analysis area. While some tools are available for financial analysis of specific company information, this is often a nonrepetitive process, and the data are not homogeneous enough for AI to provide useful insight. Many of these tools are used in auditing and forensic accounting to detect patterns in large amounts of data that may indicate fraud or incorrect transaction processing, such as the tools that automatically analyze your spending patterns to create potential fraud alerts. Most of these systems are completely automated and are probably the most visible application of AI.

The most popular and widely used tools, according to research by the Institute for Mergers, Acquisitions and Alliances (IMAA), are S&P Global Market Intelligence (S&P Capital IQ Pro—spglobal.com) and the Kensho Technologies applications, which apply machine learning models (the width and depth searches of unstructured data discussed earlier) to natural language data, essentially datasets of unstructured text (e.g., newspaper articles, company releases and filings). Users, however, do not need to know this detail: they purchase a subscription to the S&P application, and what happens "under the hood" is hidden from them.

Bloomberg (bloomberg.com) is already well known as a premier provider of financial and other data. Its AI model, BloombergGPT, which utilizes these data, is a large language model, similar to ChatGPT and other systems in which natural language questions (or "prompts") are used to generate analyses. Although it may not have the most cutting-edge algorithms and methods, it by far has the most data. The model itself has 50 billion parameters, and the sheer volume of data and the relational connections among those data more than offset any shortfall in search and summarization finesse.

FactSet Mercury (factset.com) utilizes an AI engine that, according to the company, FactSet has been developing for over 40 years. Its engine (which again is "under the hood") is based on large language models, a type of natural language processing (NLP) whose most commonly known applications are general-purpose assistants such as Google Gemini or Microsoft Copilot. FactSet, known generally as a data provider, also supplies the data for its applications and partners with other data providers as well.

Under the classification of programming tools (which also go by several other names) are low-level applications and systems, such as ChatGPT, Claude Sonnet, and a few others. These are the "engine" referred to above. They generally require programming expertise to implement, as they are basically the search and organization portion of the application or system. The programmer must supply the question (prompt) structure, how that is mapped to database search requests (written in very nonlanguage-looking syntax), and how the retrieved data are formatted for presentation, as well as the presentation itself. You can experience a bit of this by using ChatGPT (GPT-4) in a low-level mode with formatted prompts (questions). Books are available that provide instruction on how to construct prompts to interact with these systems, such as Prompt Engineering for Generative AI by James Phoenix and Mike Taylor (O'Reilly Media, ISBN-10 109815343X); The Art of Prompt Engineering With ChatGPT: A Hands-On Guide by Nathan Hunter (independently published, ISBN-10 1739296710); and Prompt Engineering for LLMs: The Art and Science of Building Large Language Model-Based Applications by John Berryman and Albert Ziegler (O'Reilly Media, ISBN-10 1098156153). There are also several "Dummies" guides, which are generally surprisingly well written and useful. Dozens of books are available, ranging from complete beginner level to advanced large-model theory.
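The pipeline described above can be sketched as a toy example: a structured prompt template, a crude mapping from the user's question to a database-style query, and formatting of the retrieved rows for presentation. The "database," company names, and field names are invented; a real system would replace the dictionary lookup with an actual search backend and send the rendered prompt to a model.

```python
# Toy prompt-construction pipeline. Everything here (company names,
# metrics, the keyword mapping) is an illustrative assumption, not a
# real product's API.

PROMPT_TEMPLATE = (
    "You are a valuation analyst. Using only the data below, answer:\n"
    "Q: {question}\n"
    "DATA:\n{rows}\n"
)

FAKE_DB = {
    ("revenue", "FY2024"): [("Acme Co", 1_200_000), ("Beta LLC", 740_000)],
}

def build_query(question):
    """Crude keyword mapping from natural language to a (metric, period) key."""
    metric = "revenue" if "revenue" in question.lower() else None
    period = "FY2024" if "2024" in question else None
    return (metric, period)

def render_prompt(question):
    """Retrieve matching rows and embed them in the structured prompt."""
    rows = FAKE_DB.get(build_query(question), [])
    row_text = "\n".join(f"- {name}: {value:,}" for name, value in rows)
    return PROMPT_TEMPLATE.format(question=question, rows=row_text or "(none)")

print(render_prompt("What was revenue in FY2024 for the comparables?"))
```

Commercial tools replace the keyword mapping with a learned parser and the dictionary with full-scale search infrastructure, but the programmer's three responsibilities named above (prompt structure, query mapping, presentation) are visible even at this scale.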

DataRails (datarails.com), which specializes in applications for the financial planning and analysis (FP&A) segment, offers several online applications, including "Insights," a language-based tool that generates summaries and analytical reports in natural language. Similar to the way many professionals write reports, each report section is defined, including where data can be found, what data to use specifically, and bounds on the data (such as dates, number of transactions, etc.). The application then provides analysis paragraphs, charts, and other materials, and the user specifies which are appropriate for inclusion in a final summary report. While it may be tempting to think the application can write an actual report, many nuances affect the results, and those are often hidden within the processing. Relying on the application to write a final report is therefore like going to court to provide expert testimony on a report that you have not seen, without knowing what data are in it or whether the calculations are applicable and correct. It is, however, useful for getting a starting place for a report, organizing the data into a logical presentation, or even sparking the creative language aspect of a report. DataRails also has other tools that perform similar tasks, such as budgeting, projection, storyboarding, and fast data queries that are unstructured in nature (such as, "How much is the matching percentage on our 401(k) plan?").
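The section-definition workflow described above can be illustrated with a toy sketch. The section names, data fields, and generated wording below are invented; no DataRails API or data format is used here, only the general idea of defining a section, its data source, and its bounds, then generating a draft paragraph for review.

```python
# Toy illustration of section-driven report drafting. All names and
# figures are hypothetical; the draft marker reflects the article's
# point that the practitioner, not the tool, owns the final report.

SECTIONS = [
    {
        "name": "Revenue Trend",
        "source": "income_statement",
        "metric": "revenue",
        "bounds": {"from": "FY2022", "to": "FY2024"},
    },
]

DATA = {
    ("income_statement", "revenue"): {"FY2022": 0.9, "FY2023": 1.1, "FY2024": 1.3},
}

def draft_section(section):
    """Generate a draft paragraph from the section's data definition."""
    series = DATA[(section["source"], section["metric"])]
    lo, hi = section["bounds"]["from"], section["bounds"]["to"]
    years = [y for y in series if lo <= y <= hi]
    first, last = series[years[0]], series[years[-1]]
    growth = (last - first) / first
    return (f"{section['name']}: {section['metric']} grew from "
            f"{first}M to {last}M ({growth:.0%}) between {lo} and {hi}. "
            f"[DRAFT - practitioner must verify data and wording]")

for s in SECTIONS:
    print(draft_section(s))
```

The explicit draft marker is the point: generated paragraphs are a starting place, and the verification burden stays with the professional.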

Discussing available tools is almost an exercise in futility: even by the time this is written, several new companies and applications will have come into existence. With each incremental gain in processing, storage, and access capability, and with ongoing funded research on models that acquire, organize, analyze, summarize, and report unstructured data, a plethora of new applications is created. However, understanding the basic logic behind how the tools work, and the approaches a tool uses to accomplish its task, can help you quickly evaluate from among the countless tools and packages available.

Artificial Intelligence in Business Valuation: Data Licensing, Legal, and Intellectual Property Considerations

AI adoption in valuation practices: The adoption of generative AI is growing rapidly across various industries, with many organizations actively testing or deploying AI tools.7 In valuation, AI significantly reduces the time spent on financial analysis and report preparation. Yet, integrating AI into valuation involves dealing with technology-specific challenges and a complicated legal landscape, requiring cautious and informed approaches.8

Legal and intellectual property considerations in AI-assisted valuation: Valuation professionals face significant legal hurdles when employing AI tools, notably around copyright and data licensing. Cases involving "data lakes" illustrate the legal risks associated with unlicensed data used to train AI models. Professionals must carefully navigate the use of financial data and market reports, mindful of copyright restrictions. Uncertainty about copyright protections for AI-generated content further complicates valuation practices, potentially affecting outputs like valuation reports.

Data protection and privacy regulations, such as the General Data Protection Regulation (GDPR), add complexity. Valuation professionals must ensure their AI systems comply with these laws, especially when valuing companies that hold extensive customer or sensitive data. Issues of data sovereignty and international data transfer regulations demand careful consideration, especially when using cloud-based AI services or working with global clients.9, 10

Intellectual property infringement risks present another significant concern. Professionals must ensure their AI valuation methods do not infringe on existing intellectual property rights. Additionally, valuation methodologies must account for uncertainties around the patentability of AI innovations, potentially affecting the valuation of AI-centric businesses.11

Best practices for using third-party licensed data in AI valuation: To manage these legal and ethical challenges, valuation professionals should implement thorough due diligence when selecting third-party data providers. This includes verifying vendors' data rights, assessing data suitability for intended AI applications, and ensuring compliance with intellectual property and privacy laws. Organizations should also develop detailed procurement processes specific to AI tools, including legal reviews and data quality assessments.12

Contracts with data suppliers should clearly allocate risks and responsibilities, explicitly addressing issues such as data bias, intellectual property rights, and compliance obligations. Clear contractual terms covering permitted data use, ownership of derived insights, indemnification against IP claims, and regulatory compliance can help mitigate legal vulnerabilities.13, 14

Data quality directly impacts AI valuation outcomes. High-quality, proprietary data significantly enhances AI model accuracy. For instance, Bloomberg developed BloombergGPT using proprietary financial data, demonstrating the advantages of targeted data sets. Valuation professionals can similarly benefit by using tailored financial datasets for their AI applications.15

Continuous oversight of AI systems is essential because generative AI evolves over time, potentially affecting data output through model drift.16 Regular audits of AI performance and underlying data quality are crucial for maintaining compliance, detecting biases, and ensuring reasonable valuation outcomes.17
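One common drift check can be sketched concretely: the Population Stability Index (PSI) between a model's training-time input distribution and the distribution it sees in production. The bucket scheme and the 0.2 alert threshold below follow common rule-of-thumb practice, not any formal standard, and the sample values are invented.

```python
# Hedged sketch of drift monitoring via the Population Stability Index.
# Inputs are hypothetical valuation multiples; thresholds are rules of
# thumb, not standards.

import math

def psi(expected, actual, buckets=5):
    """PSI over equal-width buckets spanning the expected sample's range."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0
    def shares(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / width), buckets - 1)
            counts[max(idx, 0)] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(values), 1e-4) for c in counts]
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [5.0, 5.5, 6.0, 6.2, 6.5, 7.0, 7.1, 7.5]   # multiples at training time
live_ok    = [5.2, 5.8, 6.1, 6.6, 7.0, 7.3]        # similar distribution
live_drift = [9.0, 9.5, 10.1, 10.4, 11.0, 11.5]    # market has shifted

print(f"stable: {psi(train, live_ok):.3f}")
print(f"drift:  {psi(train, live_drift):.3f}")  # values > 0.2 suggest review
```

A scheduled check of this kind turns the abstract requirement of "continuous oversight" into a measurable trigger for re-validation.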

Restrictions and risks in utilizing licensed data for AI valuation: Utilizing licensed data in AI valuation also poses inherent restrictions and risks. Traditional licensing agreements often fail to cover AI use cases, creating uncertainty around data use. Increasing recognition of AI's value is leading to rapid changes in licensing terms, potentially restricting data availability or increasing costs. Furthermore, regulatory changes or new legal interpretations may complicate data usage further, requiring professionals to stay vigilant.18

Technical obsolescence and model degradation represent additional risks. The swift pace of AI innovation means current implementations can quickly become outdated, necessitating continuous updates and maintenance. Valuation professionals must prepare for potential technological obsolescence and actively manage AI models to maintain model integrity amid changing market conditions or reporting standards.

Emerging trends and alternative approaches to data acquisition: Emerging trends offer alternative approaches to mitigate risks associated with licensed data. Synthetic data generation, using generative AI models to create fictional yet statistically representative datasets, provides a viable alternative without infringing copyrights. Industry collaborations on data sharing through structured frameworks or secure "clean rooms" enable broader data analysis while managing legal and privacy risks effectively.19
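The idea behind synthetic data can be shown in a deliberately simplified form: fit summary statistics to a (hypothetical) real sample, then draw fictional records with the same statistical profile. Production approaches use far richer models (GANs, copulas, diffusion models); this sketch only illustrates the principle that no synthetic record corresponds to any real company, yet aggregates remain representative.

```python
# Simplified synthetic-data sketch. The "real" margins are hypothetical;
# a normal fit stands in for the far richer generative models used in
# practice.

import random
from statistics import mean, stdev

random.seed(42)  # reproducible for the example

real_margins = [0.12, 0.18, 0.15, 0.22, 0.14, 0.19, 0.16, 0.21]

mu, sigma = mean(real_margins), stdev(real_margins)

# Draw fictional observations from the fitted distribution.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

print(f"real:      mean={mu:.3f}, sd={sigma:.3f}")
print(f"synthetic: mean={mean(synthetic):.3f}, sd={stdev(synthetic):.3f}")
```

Because the synthetic records are drawn from fitted parameters rather than copied, they can be shared or used for model development without exposing the underlying licensed or confidential data, subject to the caveat that a simple fit can leak less and represent less than richer generators.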

Crowdsourcing20 and public datasets present additional data acquisition methods. While proprietary financial data are often unavailable through crowdsourcing, this approach can effectively gather market sentiment or industry trends, supplementing proprietary data for comprehensive AI-driven valuations.

Conclusion

The integration of artificial intelligence into business valuation represents a pivotal evolution in how valuation professionals approach data analysis, reporting, and client service. From advanced tools that assist in financial modeling and fraud detection to the increasing relevance of ethical and legal frameworks, AI is reshaping not only what is possible, but also what is expected in professional practice.

Across all of the above sections, a consistent theme emerges: While AI enhances efficiency and opens new analytical frontiers, it does not replace the need for human judgment, ethical responsibility, and legal compliance. Whether considering regulatory mandates, court expectations, or intellectual property concerns, valuation professionals must remain vigilant in understanding and managing the implications of AI use.

Moreover, AI's ability to process vast quantities of structured and unstructured data adds value, but that value is only fully realized when paired with high-quality data, transparent methodologies, and professional accountability. The risks of bias, misinformation, or legal missteps are real and demand rigorous oversight.

In sum, AI is a powerful tool—but one that requires thoughtful application. As the valuation profession moves forward, its practitioners must embrace innovation without losing sight of the foundational principles of their work: integrity, objectivity, confidentiality, and sound professional judgment. In doing so, they will not only preserve public trust, but also ensure the responsible advancement of the field in an increasingly digital and data-driven world.

Footnotes

1. gft.com/ca/en/blog/data-valuation-in-ai.

2. ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes.

3. See Matter of Weber, 2024 N.Y. Misc. LEXIS 8609; 2024 NY Slip Op 24258 and bvresources.com/articles/bvwire/court-kosai-assisted-damages-analysis.

4. appraisalfoundation.sharefile.com/share/view/sb66f008a04444344b3580d53762c09d4.

5. Hallucination risk refers to the generation of false outputs from AI models where factually inaccurate or untruthful content is produced with respect to the model's training data or input.

6. Bias is a disproportionate weight in favor of or against something or someone. In an AI context, it's critical to understand whether a biased output is objective or the result of prejudice at some stage in design, development, or operation of the algorithm. Bias can occur anywhere in the AI lifecycle.

7. news.bloomberglaw.com/us-law-week/licensing-generative-ai-tools-gains-momentum-but-pitfalls-abound.

8. linkedin.com/pulse/legal-intellectual-property-implications-ai-valuation-pier-biga-hwfkf.

9. linkedin.com/pulse/ai-business-valuation-revolutionizing-field-precision-couillard-vrb8c.

10. insideglobaltech.com/2020/06/04/10-best-practices-for-artificial-intelligence-related-intellectual-property.

11. techtarget.com/searchcontentmanagement/answer/Is-AI-generated-content-copyrighted.

12. Forvis Mazars IT policy.

13. weforum.org/stories/2024/01/cracking-the-code-generative-ai-and-intellectual-property.

14. jonesday.com/en/insights/2023/04/generative-ai-generates-copyright-concerns.

15. research.aimultiple.com/generative-ai-data.

16. Model drift occurs when a machine learning model's accuracy decreases over time because the statistical properties of the data it encounters in the real world change from what it learned during training.

17. IVS 500—Financial Instruments.

18. datanami.com/2024/09/06/dataset-providers-alliance-releases-extensive-position-paper-on-ai-data-licensing.

19. research.aimultiple.com/generative-ai-data.

20. Crowdsourcing involves obtaining data, information, or opinions from a large, distributed group of people, typically through an online platform.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
