Technology is rapidly changing the way investment advisers deliver services to their clients. Funds now use a range of technology solutions, from advanced trading algorithms to artificial intelligence and machine learning, to provide better service. Notwithstanding these advances, funds must keep in mind their obligations under the securities laws and related rules and regulations. Regulators are stepping up their enforcement activity in this area, as shown by the SEC's recent proceedings against two robo-advisers for violations of the antifraud, advertising, and compliance provisions of the Investment Advisers Act in connection with their automated investment management services.
The term "artificial intelligence" is sometimes used loosely to designate a collection of solutions that require different inputs. Here are some key differences that funds should understand, because each technology comes with its own risks:
The power of AI and advanced data analytics lies in their ability to augment human decision-making. Although computers are increasingly able to perform tasks traditionally associated with human intelligence, no technology is truly "autonomous." Firms using AI strategies must implement policies and procedures to test them appropriately for efficacy before and after deployment, and must continue monitoring them to help ensure compliance. Here, the term "AI" is used at a high level and interchangeably with machine learning and deep learning, to refer to technology that uses data, in whatever form and quantity, to improve its performance. Summarized below are some key considerations for building a responsible AI framework.
1 – Accountability
The more complex the program, the more difficult it can be to trace a direct line from the program's logic to its result. For compliance purposes, however, funds must be able to maintain and demonstrate sufficient control over AI decisions. This matters under U.S. securities laws and regulations, where a fund must be able to show that it is properly executing client instructions. It also matters under the EU's General Data Protection Regulation (GDPR), which requires funds to explain to customers what data is being collected and exactly how it is used, which in turn requires explaining how any automated solutions work. Finally, funds should be able to trace AI decisions to confirm that those decisions align with the fund's own objectives. That means documenting every stage of an AI solution's lifecycle: testing and approvals, training, monitoring and maintenance.
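To make that documentation concrete, the sketch below shows one way a fund might record each AI decision in an append-only audit log. It is a minimal illustration only: the field names, the log format and the rebalancing example are all hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable AI decision: which model ran, on what inputs, with what result."""
    model_name: str
    model_version: str   # ties the decision to a tested and approved build
    inputs_digest: str   # fingerprint of the inputs, not the raw client data
    output: str
    decided_at: str

def log_decision(model_name, model_version, inputs, output, path="decision_audit.jsonl"):
    """Append a record of a single AI decision to a JSON-lines audit log."""
    record = DecisionRecord(
        model_name=model_name,
        model_version=model_version,
        inputs_digest=hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        output=str(output),
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Illustrative call: a hypothetical rebalancing model recommends a trade.
log_decision("rebalancer", "1.4.2", {"ticker": "ABC", "target_weight": 0.05}, "BUY 120")
```

Hashing the inputs lets the fund later show which data drove a decision without copying client data into the log, while the version field connects each decision back to the testing and approval records for that particular build.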
2 – Bias
We are all familiar with the concept of "garbage in, garbage out." Feeding algorithms incomplete or incorrect data is a primary cause of erroneous AI outputs. Risk managers across industries are therefore increasingly concerned about unintended bias in AI, which can arise when data sources are incomplete or contain skewed information. Rigorous testing processes and controls over the data, the model and the human use of AI can help ensure data integrity and reduce the risk of unintended bias and error.
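As a minimal illustration of such testing, the sketch below checks a training set for incomplete records and for skewed representation across a grouping field. The field names and data are hypothetical, and real pipelines would apply far richer statistical checks.

```python
from collections import Counter

def data_quality_report(rows, required_fields, group_field):
    """Flag incomplete records and skewed group representation before training."""
    incomplete = [i for i, row in enumerate(rows)
                  if any(row.get(f) in (None, "") for f in required_fields)]
    groups = Counter(row[group_field] for row in rows if row.get(group_field))
    total = sum(groups.values())
    shares = {group: count / total for group, count in groups.items()}
    return {"incomplete_rows": incomplete, "group_shares": shares}

rows = [
    {"income": 50_000, "region": "north"},
    {"income": None,   "region": "north"},   # incomplete: will be flagged
    {"income": 72_000, "region": "south"},
]
print(data_quality_report(rows, required_fields=["income"], group_field="region"))
# Incomplete row 1 is flagged; 'north' is over-represented two to one.
```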
3 – Transparency
AI is sometimes referred to as a "black box" because the connection between inputs and outputs is not always clear and the inner workings are difficult to understand. This creates a risk of discrepancy between the program's code and the resulting transactions. The "black box" problem may also create a perception of opacity for asset managers and customers alike: if they do not know how the AI reaches its decisions, they may not trust it. Managers, in turn, may be unable to explain to regulators how decisions are made in complex AI-driven transactions that involve hidden decision-making layers. Adding transparency and explainability to the modelling process is key to any responsible AI framework.
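One simple, model-agnostic way to add a measure of explainability is a perturbation test: nudge each input slightly and observe how much the output moves. The sketch below illustrates the idea with a hypothetical toy scoring model; production systems would typically rely on dedicated explainability tooling rather than this bare-bones approach.

```python
def explain_by_perturbation(model, inputs, delta=0.01):
    """Model-agnostic sensitivity check: nudge each input by a small relative
    amount and record how much the output moves. Larger moves suggest the
    feature mattered more to this particular decision."""
    baseline = model(inputs)
    sensitivities = {}
    for name, value in inputs.items():
        perturbed = dict(inputs)
        perturbed[name] = value * (1 + delta)
        sensitivities[name] = (model(perturbed) - baseline) / delta
    return baseline, sensitivities

# Stand-in for the fund's actual model; a simple linear score keeps the demo transparent.
def toy_score(x):
    return 0.7 * x["momentum"] - 0.2 * x["volatility"] + 0.1 * x["liquidity"]

score, drivers = explain_by_perturbation(
    toy_score, {"momentum": 1.2, "volatility": 0.8, "liquidity": 1.0})
print(score, drivers)  # momentum dominates, matching the model's largest weight
```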
4 – Data Integrity
As mentioned, AI solutions process data iteratively, refining the way decisions are made over time. The corollary is that AI decisions are only as good as their inputs. First, the earliest results are likely to be of lower quality than subsequent ones. Second, if data remains limited or inaccessible after the first decisions are made, the results might not improve at the expected rate. Simply put, AI's effectiveness depends on the availability of sufficient, high-quality data.
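In practice, this suggests gating each AI run on basic sufficiency, freshness and completeness checks before any decisions are made. The sketch below illustrates such a gate; the thresholds are placeholders that each fund would calibrate to its own data.

```python
from datetime import datetime, timedelta, timezone

# Placeholder thresholds; each fund would calibrate these to its own data.
MIN_ROWS = 1_000
MAX_STALENESS = timedelta(hours=24)
MAX_MISSING_SHARE = 0.02

def data_is_fit_for_use(row_count, last_updated, missing_share):
    """Gate an AI pipeline on sufficiency, freshness and completeness of its inputs."""
    checks = {
        "enough_rows": row_count >= MIN_ROWS,
        "fresh": datetime.now(timezone.utc) - last_updated <= MAX_STALENESS,
        "complete": missing_share <= MAX_MISSING_SHARE,
    }
    return all(checks.values()), checks

ok, checks = data_is_fit_for_use(
    row_count=12_500,
    last_updated=datetime.now(timezone.utc) - timedelta(hours=3),
    missing_share=0.005,
)
print(ok, checks)  # True: all checks pass; any failure should halt or flag the run
```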
5 – Governance
The fast pace at which technology evolves requires boards and governance functions to stay knowledgeable and abreast of the technology. It is important to design effective AI operating models and processes to improve accountability, transparency and quality. Regulators will expect funds to have in place robust and effective governance and controls, including a risk management framework (RMF) to identify, assess, control and monitor the risks associated with each AI application. In this regard, AI may also accelerate the RMF lifecycle and affect risk appetite statements.
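To illustrate the "identify, assess, control and monitor" cycle, the sketch below models entries of a hypothetical AI risk register. The scoring scale, application names and controls are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One line of a risk register: a risk identified for a specific AI
    application, its assessment, the controls applied, and how it is monitored."""
    application: str
    risk: str
    likelihood: int                 # 1 (rare) to 5 (almost certain)
    impact: int                     # 1 (minor) to 5 (severe)
    controls: list = field(default_factory=list)
    monitoring: str = ""

    @property
    def score(self):
        return self.likelihood * self.impact

register = [
    AIRiskEntry("trade-recommender", "model drift degrades advice quality",
                likelihood=3, impact=4,
                controls=["monthly back-testing", "champion/challenger review"],
                monitoring="weekly drift dashboard"),
    AIRiskEntry("client-chatbot", "misleading statements to customers",
                likelihood=2, impact=5,
                controls=["response templates", "human escalation path"],
                monitoring="daily transcript sampling"),
]
# Review the register highest-scoring risk first.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(entry.score, entry.application, entry.risk)
```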
6 – Errors
With the speed at which AI evolves comes the potential for magnification of errors. Because AI is designed to "learn" from inputs, an error that occurs early in the program's execution can quickly turn into a large-scale problem. Updates should be executed as smoothly as possible, minimizing any risks to upcoming, pending or completed transactions. Even so, glitches or "bugs" might occur when AI is upgraded, and there may also be issues with the accessibility or use of legacy data. Funds should have processes and procedures in place to identify and document faulty logic or reasoning, as well as remediation protocols.
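A cheap early-warning control for compounding errors is to compare recent model outputs against an approved baseline window and escalate when they diverge. The sketch below shows a crude version of such a drift alert; the data and threshold are invented for illustration.

```python
import statistics

def drift_alert(baseline, recent, max_shift_in_sds=3.0):
    """Flag when recent model outputs have drifted from an approved baseline
    window by more than `max_shift_in_sds` standard deviations - a cheap
    early-warning sign that a small error may be compounding."""
    mu, sd = statistics.mean(baseline), statistics.stdev(baseline)
    shift = abs(statistics.mean(recent) - mu) / sd
    return shift > max_shift_in_sds, shift

baseline = [0.01, 0.02, 0.015, 0.018, 0.012, 0.017, 0.014, 0.016]
recent = [0.09, 0.11, 0.10]    # outputs observed after a model update
alert, shift = drift_alert(baseline, recent)
print(alert, round(shift, 1))  # True: escalate and pause per the remediation protocol
```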
7 – Security
Increased dependency on AI may introduce additional security
vulnerabilities. Funds should have in place appropriate procedures
for rigorous validation, continuous monitoring, verification and
"adversarial" testing. Limiting access to AI systems to
appropriate personnel may help prevent manipulation and
exploitation of data.
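As a sketch of what limiting access to appropriate personnel can look like in code, the example below applies a deny-by-default role check, with logging, before sensitive model operations. The roles, actions and identity source are hypothetical.

```python
from functools import wraps

# Illustrative role map; in practice this would come from the fund's identity system.
AUTHORIZED_ROLES = {
    "retrain_model": {"quant-dev"},
    "run_model": {"quant-dev", "portfolio-manager"},
}

def requires_role(action):
    """Deny-by-default guard: only roles authorized for `action` may invoke
    the wrapped function, and every attempt is logged."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user_role, *args, **kwargs):
            if user_role not in AUTHORIZED_ROLES.get(action, set()):
                print(f"DENIED: role={user_role} action={action}")
                raise PermissionError(f"{user_role} may not {action}")
            print(f"ALLOWED: role={user_role} action={action}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@requires_role("retrain_model")
def retrain(dataset_path):
    return f"retraining on {dataset_path}"

print(retrain("quant-dev", "s3://bucket/curated.parquet"))  # allowed
try:
    retrain("intern", "s3://bucket/curated.parquet")        # denied and logged
except PermissionError as exc:
    print(exc)
```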
8 – Cost
Although funds often adopt AI solutions to drive cost efficiencies, AI might not always be the most cost-efficient solution once other risks have been assessed. This depends on the
degree of personalization that is required by the fund, and the
cost at which it is offered. AI solutions also require skilled
technical staff to design, maintain and run the systems.
Additionally, what works in one sector might be very different in
another sector. For example, a recent survey conducted by the
European Financial Management Association in partnership with
Deloitte shows that the banking and insurance sectors assess the
impact of AI differently. Although AI is an attractive customer
service option for banks, it is less appealing for insurance
companies, perhaps due to the different level of engagement
required for certain transactions. Conversely, AI might provide a
better solution for back office/operations in the insurance sector
than in the banking sector.
9 – Insurance
Funds and other organizations increasingly manage the risks posed
by their AI solutions with appropriate insurance coverage. Although
new products are being created to meet increasing demand, insurers
occasionally perceive AI as a risk multiplier. In addition to
creating risks of its own, AI changes how insurers and funds alike
analyze risks they have already identified.
AI must be used responsibly. Although funds are increasingly considering AI solutions, they should look carefully at how AI impacts existing risks and creates new potential legal and compliance risks for their organization. We expect regulators to closely monitor AI applications and continue to be aggressive in bringing enforcement actions against firms for the misuse of AI. Building a responsible AI framework will help funds maintain compliance with legal and regulatory requirements, while continuing to provide high levels of customer service in a cost-efficient manner.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.