27 January 2026

AI Watch | January 2026

Borden Ladner Gervais LLP


BLG is a leading, national, full-service Canadian law firm focusing on business law, commercial litigation, and intellectual property solutions for our clients. BLG is one of the country’s largest law firms with more than 750 lawyers, intellectual property agents and other professionals in five cities across Canada.

Stay on top of current issues and advances in artificial intelligence that are transforming the banking and financial services sector. Browse the latest legislative developments and emerging case law in the field. Discover focused updates relevant for you as well as broader-scale developments that are also worth a look. Our AI Watch newsletter is published periodically.

In this edition:

  • Canada
    • Canada leads the G7 Industry, Digital and Technology Ministers' Meeting
    • Canada and the EU sign a Memorandum of Understanding on Artificial Intelligence
    • FCAC report identifies key risks and opportunities regarding AI adoption in the financial sector
  • United States
    • President Trump officially signs Executive Order to establish a national framework for AI regulation
  • European Union / United Kingdom
    • The EU publishes its Code of Practice on Transparency of AI-Generated Content
  • News in brief

Canada

Canada leads the G7 Industry, Digital and Technology Ministers' Meeting

Canada's G7 presidency has come to an end, but Canada led one final time at the G7 Industry, Digital and Technology Ministers' Meeting, which culminated in a series of commitments on AI, digital infrastructure, and industrial resilience.

Canada and the EU sign a Memorandum of Understanding on Artificial Intelligence

Following the first Digital Partnership Council meeting, the EU and Canada have strengthened their digital partnership by signing a Memorandum of Understanding, thereby setting an ambitious roadmap for collaboration on AI. The two partners reaffirmed their shared commitment to competitiveness, digital sovereignty, and support for SMEs.

  • The EU and Canada shared best practices on standards, regulation, skills development, and adoption, particularly in strategic sectors such as health, energy, manufacturing, public services, and climate‑related research.
  • Both sides committed to exploring the facilitation of mutual recognition of conformity assessments for high-risk AI systems in line with the EU AI Act.
  • Although some argue that this signals a shift in Canada's regulatory approach to AI, Minister Solomon insists that he still does not intend to over-regulate AI.

FCAC report identifies key risks and opportunities regarding AI adoption in the financial sector

The Financial Consumer Agency of Canada (FCAC) and the Global Risk Institute held the fourth and final Financial Industry Forum on Artificial Intelligence (FIFAI II) workshop. These meetings bring together experts from academia, regulators, banks, insurers, pension plans, fintechs, and research centres to discuss safeguards and risk management in the use of AI by financial institutions. Since July 2025, three FIFAI II workshops have been held on security and cybersecurity, financial crime, and AI and financial stability. This fourth workshop gathered over 55 thought leaders to examine how AI is reshaping consumer protection and financial well‑being. The resulting report explores both the potential of AI, through increased access, personalization, and financial inclusion, and the emerging risks tied to bias, fraud, digital exclusion, and declining transparency.

  • According to the report, key risks for AI adoption in the financial sector include a lack of transparency, explainability, and accountability where financial institutions do not properly disclose and explain their use of AI tools to consumers, as well as a loss of trust between institutions and consumers where data is of poor quality or contains biases.
  • Participants identified three key principles to mitigate these issues and to safeguard consumers: the inclusion by design of consumer interests in the development of AI products from the outset, the support of innovation and resilience in the financial sector to reduce costs and lower prices for consumers, and the promotion of AI literacy amongst the public to advance access, inclusion, and consumer protection.
  • These findings demonstrate that while regulators are aware of the opportunities AI can bring to the financial sector, they will prioritize the adoption of responsible AI.

United States

President Trump officially signs Executive Order to establish a national framework for AI regulation

After a leaked document revealed President Trump's intentions, the White House officially issued an Executive Order to prevent U.S. states from imposing AI regulations that conflict with federal priorities, arguing that a patchwork of state laws threatens innovation, interstate commerce, and America's global AI leadership. The order signals a shift toward federal preemption and a unified national AI governance model, while pressing Congress to codify a minimally burdensome framework.

  • The order targets what it labels as "onerous and excessive" state requirements, such as laws that mandate model disclosure, alter AI outputs, or impose anti‑bias conditions, and directs the Department of Justice to challenge conflicting state statutes through a newly created AI Litigation Task Force.
  • The Secretary of Commerce is instructed to review problematic state AI laws within 90 days and may withhold funding to states that maintain conflicting regulatory regimes.

European Union / United Kingdom

The EU publishes its Code of Practice on Transparency of AI-Generated Content

The European Commission's AI Office launched its first draft of a Code of Practice on marking and labelling AI‑generated content to support compliance with the EU AI Act's transparency requirements. The Code was drafted by independent experts and guides providers and deployers of generative AI systems on how to mark, detect, and label synthetic and manipulated content, thereby reinforcing public trust and safeguarding the integrity of Europe's information ecosystem. The current version is high‑level and will be refined through continued stakeholder feedback.

  • Providers of generative AI must combine several machine‑readable techniques (such as metadata signatures, imperceptible watermarks, and, where necessary, logging or fingerprinting) to ensure AI‑generated content can be reliably detected across formats.
  • Providers must offer free‑of‑charge detection interfaces or a publicly available detector enabling users, platforms, and authorities to verify if content was AI‑generated. Detection results should be human-understandable.
  • Deployers must use a common taxonomy distinguishing fully AI‑generated from AI‑assisted content, and apply a common disclosure icon. The icon must be visible at first exposure and adapted to the content modality (video, image, audio, text), with special rules for creative or artistic works and accessibility accommodations (like audio cues or alt text).

News in brief

  • Canada has launched its first public AI Register to record where and how AI is used across federal institutions.
  • The Monetary Authority of Singapore published a consultation paper proposing sector‑wide, proportionate AI risk‑management guidelines for financial institutions, outlining expectations for governance, AI oversight, lifecycle controls, and capability building to ensure the responsible use of AI.
  • The South African Reserve Bank and the Financial Sector Conduct Authority published a joint report to better understand the opportunities and risks that AI adoption presents for South Africa's financial sector. The report finds that banks are the primary adopters of AI, while insurance companies tend to take a more cautious approach.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
