Switzerland’s sector-specific, technology-neutral approach to AI – from FINMA Guidance 08/2024 to the Council of Europe AI Convention – mapped against the operational reality of Swiss FinTech.
In a nutshell. Switzerland has, on 12 February 2025, formally rejected the idea of a horizontal Swiss “AI Act” in favour of a sector-specific, technology-neutral approach anchored in existing law and the Council of Europe Framework Convention on AI. For the financial industry, FINMA’s Guidance 08/2024 of 18 December 2024 is the practical reference point: it does not create new substantive law, but operationalises existing governance, risk management and outsourcing duties for AI-driven processes. The result is a regulatory environment that is conceptually permissive yet supervisorily demanding – a combination that rewards FinTech firms with mature governance and exposes those without.
I. Introduction
Artificial intelligence has moved from a peripheral efficiency tool to a core production factor of Swiss financial services. Robo-advisory, credit scoring, transaction monitoring, fraud detection, algorithmic execution and – most recently – generative AI for client communication and document review are now embedded in the daily operations of Swiss banks, asset managers and licensed FinTech firms. According to FINMA’s survey of around 400 supervised institutions published on 24 April 2025, roughly half are already using AI, with a further quarter intending to do so within three years; on average, respondents have five applications in production and nine in development.
This rapid adoption is taking place in a jurisdiction that has deliberately chosen not to legislate a horizontal AI statute. Switzerland’s position is instead set by three layers of regulation: the existing technology-neutral financial market acts, the FINMA Guidance 08/2024 on governance and risk management when using AI, and – prospectively – the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (“CoE AI Convention”), which Switzerland signed on 27 March 2025. The European Union’s AI Act (Regulation (EU) 2024/1689) operates as an extraterritorial benchmark for Swiss firms doing cross-border business.
This contribution maps that framework, identifies the legal pressure points that arise where it meets real-world FinTech architectures, and offers practical observations for boards, general counsel and compliance officers responsible for AI deployment in Switzerland.
II. The Swiss FinTech Market and AI Use Cases
Switzerland remains one of the leading European FinTech hubs, supported by a deep banking ecosystem, the Crypto Valley cluster around Zug, and a graduated regulatory architecture that includes (i) the unsupervised “sandbox” for public deposits up to CHF 1 million (Art. 6 para. 2 lit. a–c BankV), (ii) the FinTech licence under Art. 1b BankG for deposit-taking up to CHF 100 million without interest margin business, and (iii) the full banking, securities-firm or financial-institution licences under BankG and FinIA for established players.
Across this landscape, the most material AI use cases from a regulatory perspective are:
- Robo-advisory and portfolio management – raising suitability and appropriateness duties under Art. 10–14 FinSA, plus the risk-disclosure and information regime in Art. 7 ff. FinSA;
- Credit scoring and creditworthiness assessments – raising consumer-credit, data-protection and non-discrimination concerns; under the EU AI Act these systems are explicitly classified as high-risk (Annex III);
- AML transaction monitoring and KYC – increasingly carried out by AI tools in the context of Art. 6 GwG and Art. 20 AMLO-FINMA;
- Fraud detection – typically pattern-recognition models operating on payment, behavioural and device data;
- Algorithmic and high-frequency trading – falling within FINMA Circular 2018/1 “Organised trading facilities” and the market-integrity regime of FinMIA;
- Generative AI for internal use – client correspondence drafting, knowledge management and code generation, where the FINMA April 2025 survey found that 91% of AI-using institutions already deploy generative tools.
Each of these use cases activates a different combination of duties under the Swiss financial market acts, the revised Federal Act on Data Protection (revFADP / nDSG) and the Anti-Money Laundering Act (GwG). The regulator’s position is that the choice of technology does not alter substantive obligations – a point of crucial practical importance.
III. The Regulatory Framework
1. No “Swiss AI Act”: the sector-specific, technology-neutral approach
On 12 February 2025, the Federal Council formally adopted the regulatory architecture for AI in Switzerland. Three points are decisive for FinTech operators. First, no horizontal AI statute will be proposed; existing sectoral legislation – financial market law, data protection law, product safety, competition law – will be amended only where strictly necessary. Second, the CoE AI Convention will be ratified and transposed, principally for state actors and cross-cutting fundamental-rights areas (data protection, non-discrimination, transparency, oversight); a consultation draft is expected by the end of 2026. Third, soft-law instruments (industry self-declarations, codes of conduct) are explicitly contemplated as complementary tools.
This approach sits squarely within the Swiss tradition of principles-based, technology-neutral financial regulation. The Federal Supreme Court has confirmed that the same substantive thresholds and duties apply regardless of whether a process is performed by humans or algorithms (see ATF 141 II 103, on volume thresholds for securities dealers). Translated into AI terms, FINMA’s mantra – “same business, same risks, same rules” – means that an AI-driven KYC engine triggers the same obligations as a human compliance officer, even though the residual risks may be technologically very different.
2. Financial market regulation: FINMA Guidance 08/2024
FINMA published its Guidance 08/2024 on Governance and Risk Management when using Artificial Intelligence on 18 December 2024. The Guidance is not a legally binding circular and does not create new duties. Rather, it consolidates how FINMA expects supervised institutions to satisfy existing organisational duties – in particular under Art. 3 para. 2 lit. a BankG, Art. 9 FinIA and the relevant FINMA circulars on operational risk, governance and outsourcing – when AI is part of the production chain.
FINMA structures its expectations along seven dimensions:
- Governance and accountability: clear allocation of responsibility at individual (not merely committee) level for each material AI application;
- Inventory and risk classification: a complete inventory of AI use cases with a calibrated risk assessment, distinguishing materially regulatory-relevant from non-material applications;
- Data quality: documented sourcing, completeness, representativeness and currency of training and inference data;
- Tests and continuous monitoring: pre-deployment validation as well as ongoing performance, drift and bias monitoring;
- Documentation: technical documentation of model design, assumptions, limitations and material changes – a particular challenge for continuously learning systems, where Art. 15 FINMA-OS imposes specific documentation duties on internal models in insurance;
- Explainability: model outputs must be sufficiently understandable both to staff using them and, where relevant, to clients and supervisors;
- Independent review: model validation by parties independent of model developers, proportionate to materiality.
The Guidance applies across the supervised universe – banks, securities firms, insurers, fund management companies, managers of collective assets, financial market infrastructures – and is being operationalised through ongoing supervisory dialogue rather than through enforcement action against the rules as such. FINMA has, however, indicated that institutions should engage early with the regulator before deploying AI in critical processes or for the calculation of regulatory parameters.
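FINMA deliberately prescribes outcomes rather than tooling, but the inventory-and-classification expectation lends itself to a simple structured record. The sketch below is purely illustrative: the field names, risk tiers and schema are the author's assumptions, not FINMA terminology, and any real inventory would be calibrated to the institution's own risk taxonomy.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    # Illustrative tiers only; FINMA expects an institution-specific,
    # calibrated distinction between material and non-material applications
    NON_MATERIAL = "non-material"
    MATERIAL = "material"
    CRITICAL = "critical"  # e.g. calculation of regulatory parameters

@dataclass
class AIUseCase:
    """One entry in the institution-wide AI inventory (hypothetical schema)."""
    name: str
    owner: str                 # individual accountability, not merely a committee
    risk_tier: RiskTier
    data_sources: list[str]
    last_validation: date
    independent_review: bool   # validated by a party independent of the developers

def materially_relevant(inventory: list[AIUseCase]) -> list[AIUseCase]:
    """Filter the inventory to entries attracting the full governance regime."""
    return [u for u in inventory if u.risk_tier is not RiskTier.NON_MATERIAL]

inventory = [
    AIUseCase("transaction-monitoring", "Head of Compliance", RiskTier.CRITICAL,
              ["payments", "KYC"], date(2025, 3, 1), True),
    AIUseCase("internal-chatbot", "COO", RiskTier.NON_MATERIAL,
              ["knowledge base"], date(2025, 1, 15), False),
]
print([u.name for u in materially_relevant(inventory)])  # → ['transaction-monitoring']
```

The point of such a record is evidential: each of FINMA's seven dimensions (ownership, classification, data sources, validation, independent review) maps to a field the institution can produce on request.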
3. Data protection law: the revFADP and the GDPR comparison
The revised Federal Act on Data Protection (revFADP / nDSG), in force since 1 September 2023, is the second main pillar for FinTech AI deployments. Three provisions are particularly relevant:
- Profiling and high-risk profiling (Art. 5 lit. f and lit. g revFADP): high-risk profiling – e.g. AI-driven creditworthiness or risk scoring producing a substantial assessment of essential aspects of a person – triggers heightened consent, transparency and protection requirements;
- Automated individual decisions (Art. 21 revFADP): where a decision producing legal effects or significantly affecting the person is taken purely automatically, the data subject must be informed and is, in principle, entitled to request review by a natural person (cf. Art. 22 GDPR);
- Data minimisation and purpose limitation (Art. 6 revFADP): structural constraints on the volume of training and inference data, and on its repurposing.
While the revFADP is broadly aligned with the GDPR, important differences remain. The revFADP does not impose administrative fines on legal entities (only criminal fines on natural persons up to CHF 250’000), and the data protection impact assessment under Art. 22 revFADP is conceptually similar to but procedurally lighter than its GDPR counterpart. Swiss FinTech firms with EU-resident clients must, however, comply with both regimes; for many, the GDPR remains the binding upper bound.
4. Anti-money laundering: AI in KYC and transaction monitoring
AI is increasingly deployed to fulfil the AML monitoring and reporting duties in Art. 6, 9 and 21 GwG, and the operational duties in Art. 20 AMLO-FINMA on the monitoring of business relationships and transactions. FINMA explicitly accepts AI-supported monitoring, provided that financial intermediaries retain the ability to substantiate, on a case-by-case basis, why a transaction was or was not flagged. Two practical issues recur:
- False negatives and bias: a model that systematically under-detects certain typologies (or, conversely, over-flags certain client segments) creates simultaneous AML and non-discrimination exposure. Independent model validation, calibration to Swiss typologies (rather than imported foreign templates) and human review of edge cases are essential;
- Simplified due diligence (Art. 7a GwG): low-risk automated payment systems may, under specified conditions, benefit from simplified due diligence. AI tools used to keep transactions within those parameters must be auditable; if they fail, the financial intermediary cannot retroactively claim the simplified regime.
Reporting duties under Art. 9 and Art. 37 GwG remain personal duties of the financial intermediary; an AI tool can support, but cannot discharge, them.
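FINMA's condition that the intermediary must be able to substantiate, case by case, why a transaction was or was not flagged is in essence an auditability requirement around the model. A minimal sketch of that wrapper layer follows; the scoring model itself, the threshold value and all names are hypothetical illustrations, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class FlagDecision:
    """Record of why a transaction was (or was not) flagged -- the
    case-by-case substantiation the intermediary must retain."""
    transaction_id: str
    flagged: bool
    score: float
    reasons: list[str]  # human-readable triggering features

def evaluate(tx_id: str, score: float, triggered: list[str],
             threshold: float = 0.8) -> FlagDecision:
    # The AI scoring model is out of scope here; this wrapper only
    # guarantees that every decision carries an auditable rationale,
    # whether the outcome is a flag or a pass.
    flagged = score >= threshold
    reasons = triggered if flagged else [
        f"score {score:.2f} below threshold {threshold}"
    ]
    return FlagDecision(tx_id, flagged, score, reasons)

d = evaluate("TX-1042", 0.91, ["unusual corridor", "velocity spike"])
print(d.flagged, d.reasons)  # → True ['unusual corridor', 'velocity spike']
```

Recording the rationale for *non*-flagged transactions as well is what allows the intermediary to defend false-negative decisions after the fact, which is where AML exposure typically crystallises.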
5. The international dimension: CoE AI Convention and EU AI Act
Switzerland chaired the negotiations of the CoE AI Convention adopted on 17 May 2024 and signed it in March 2025. The Convention obliges signatories to ensure transparency, accountability, human oversight and risk management throughout the AI lifecycle; it applies in full to public-sector actors and obliges States Parties to implement appropriate measures vis-à-vis private actors. For Swiss FinTech, the practical impact will materialise through implementing legislation, expected for consultation by the end of 2026.
The EU AI Act has more immediate cross-border bite. AI systems used to evaluate the creditworthiness of natural persons are listed in Annex III as high-risk; this triggers the regime of Art. 9–15 of the AI Act on risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy/robustness. Swiss FinTech firms providing services into the EU – either directly or through EU-domiciled intermediaries – must assume that EU AI Act compliance, in addition to FINMA expectations, defines the de facto upper bound of their governance design. FINMA Guidance 08/2024 was deliberately drafted with an outlook towards the EU framework, and many of its expectations track – in substance if not in form – the obligations of Art. 9 ff. AI Act.
IV. Key Legal Challenges
1. Transparency and explainability
The “black box” problem is not merely a technical inconvenience but a regulatory exposure. The combination of FINMA’s explainability expectation, Art. 21 revFADP’s right to human review, the FinSA information duties (Art. 7 ff. FinSA) and the EU AI Act’s human-oversight requirement converges on a single practical conclusion: a Swiss FinTech firm must be able to articulate, in plain language, why an AI system reached a given outcome in a given case. Continuously learning models pose a particular challenge here, since their decision logic at time T may differ materially from time T-1; this is precisely the constellation that Art. 15 FINMA-OS addresses for internal insurance models, requiring that material modifications be unambiguously identified and briefly explained.
2. Liability allocation
AI does not have legal personality in Swiss law. Liability for AI-driven decisions therefore falls into the existing matrix of contractual liability (Art. 97 ff. CO), tort liability (Art. 41 ff. CO), product liability (Federal Product Liability Act) and – for licensed institutions – supervisory liability of the institution and, in some constellations, of individuals acting as compliance officers or factual organs. Two distinctions matter in practice:
- Provider versus user. The technology provider may be liable to the financial institution under contract for defects, breaches of warranty or non-conformity; the financial institution remains liable to the client and the regulator regardless. FINMA’s position – mirrored in the Swiss Federal Council’s sectoral approach – is that AI does not relieve the regulated entity of any duty;
- Institution versus individual. Personal supervisory liability of compliance officers and senior managers continues to apply where AI failures are traceable to inadequate governance; this is a recurring theme of FINMA enforcement practice.
Drafting AI procurement and SaaS contracts therefore deserves particular attention: clauses on training data warranties, model lineage, audit rights, security incident notification, sub-processor disclosure, model-change notification, exit and reversibility are now standard items in FinTech procurement.
3. Discrimination and bias
Swiss law does not contain a general private-sector anti-discrimination statute equivalent to EU directives, but the constitutional principle of equality (Art. 8 BV), the personality protection regime in Art. 28 CC, the GwG, the FinSA suitability rules and Art. 5 lit. g revFADP on high-risk profiling combine to create an effective non-discrimination perimeter for AI in financial services. Credit scoring models that systematically disadvantage protected groups, KYC tools that produce skewed risk profiles, or fraud-detection systems that disproportionately flag certain demographics generate exposure on multiple fronts at once.
4. Outsourcing and cloud
FINMA Circular 2018/3 (“Outsourcing – banks and insurers”) sets the framework for any externalisation of significant functions – and the bulk of AI use cases in Swiss FinTech today involve at least some degree of cloud or third-party AI. The 2024 FINMA Risk Monitor and the April 2025 survey both flag the increasing dependence of financial institutions on a small number of BigTech providers as a structural source of operational risk. From a legal perspective, the recurring questions are: maintenance of an inventory of significant outsourcing relationships, location of data and processing (with cross-border data transfer implications under Art. 16 ff. revFADP), audit and inspection rights of FINMA and the institution, business-continuity planning, and exit strategies. The introduction of mandatory cyber-incident reporting for critical infrastructures on 1 April 2025 adds a further layer of obligation that AI-driven processes must accommodate.
5. Cybersecurity and model security
AI systems introduce model-specific cyber risks – prompt injection, model inversion, training-data poisoning, adversarial inputs – alongside the conventional IT risks. FINMA Circular 2023/1 on operational risks and resilience treats these as part of the institution’s broader IT and cyber-risk management duties, but boards should expect a sharper supervisory focus on model-specific threat modelling in the coming examination cycles.
V. Opportunities and Regulatory Trends
Notwithstanding the above, the Swiss environment is, in a comparative perspective, AI-friendly. The principles-based supervisory style, the absence of a prescriptive horizontal AI statute, the established sandbox and FinTech-licence categories and the willingness of FINMA to engage in early dialogue with innovators continue to make Switzerland a competitive home for FinTech AI deployment. The fact that FINMA Guidance 08/2024 sets outcomes rather than dictating specific tooling allows institutions to choose between in-house, hybrid and outsourced architectures based on commercial logic rather than regulatory choreography.
Three trends will shape the next two years. First, the implementation of the CoE AI Convention will translate transparency, non-discrimination and oversight principles into Swiss positive law, most likely through targeted amendments to the revFADP, the FinSA and possibly the GwG. Second, the EU AI Act will, through its extraterritorial reach and reputational pull, draw Swiss FinTech governance towards the high-risk regime irrespective of whether Switzerland formally adopts equivalent rules. Third, FINMA’s supervisory practice will mature into a more granular set of expectations on model documentation, validation and incident reporting, drawing on the data gathered through its 2025 survey.
VI. Recommendations
For Swiss FinTech firms, banks, asset managers and insurers deploying AI, the framework analysed above translates into a coherent set of recommendations across five levels of the organisation. The order matters: governance recommendations sit upstream of compliance and operational fixes, and Swiss recommendations sit alongside – not in isolation from – EU-facing ones.
A. For the Board and Senior Management
- Take direct ownership of AI governance. FINMA’s expectation of accountability “at individual (not committee) level” requires a designated senior officer – typically the COO, CRO or a Chief Data / AI Officer – explicitly mandated for AI strategy and risk, with structured reporting into a board committee.
- Ensure baseline AI literacy at board level. Effective challenge of management proposals requires directors to understand model risk, data lineage, bias, drift and explainability at the conceptual – not implementation – level.
- Integrate AI into strategic planning, M&A and procurement due diligence. Acquired or licensed AI assets carry inherited model, data and contractual risks that must be priced in before, not after, completion.
- Review the institution’s risk appetite statement. Material AI-driven processes should fall within the firm’s stated risk tolerances and not in undocumented grey zones.
B. For the General Counsel and Compliance
- Establish a complete AI inventory mapped against the BankG, FinIA, FinSA, GwG, the revFADP and – where relevant – the EU AI Act. This is the single most-leveraged compliance investment and the foundation of every subsequent step.
- Adopt a written AI policy aligned with FINMA Guidance 08/2024, integrated into – not parallel to – the existing risk management, ICS and outsourcing framework.
- Re-paper third-party AI and cloud contracts. Standard items now include training-data warranties, model lineage, audit and inspection rights (extending to FINMA), sub-processor disclosure, model-change notification, security-incident notification, business continuity, and exit / reversibility provisions.
- Conduct a Data Protection Impact Assessment under Art. 22 revFADP for every AI system involving high-risk profiling (Art. 5 lit. g revFADP) or automated individual decisions (Art. 21 revFADP), and document the human-review pathway.
- Develop AI-specific incident response playbooks covering model failure, data breach, regulatory inquiry and client complaint scenarios, with clear escalation thresholds and reporting timelines.
C. For Risk Management and Operations
- Implement continuous monitoring for drift, bias, accuracy and robustness, with documented thresholds for human escalation, model retraining and rollback.
- Establish an independent model validation function, proportionate to materiality, that is structurally separate from model development and deployment.
- Document every material model change, training-data update and architectural modification, drawing on the documentation logic of Art. 15 FINMA-OS for internal insurance models.
- Build human-in-the-loop checkpoints into every AI-driven process producing legal effects on, or significantly affecting, clients (Art. 21 revFADP), with documented evidence that the human review is meaningful and not merely formal.
- Stress-test cyber and operational resilience scenarios specific to AI: prompt injection, model inversion, training-data poisoning, adversarial inputs and vendor outage.
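The first recommendation above, documented thresholds linking monitored drift to escalation, retraining and rollback, can be captured in a simple decision rule. The thresholds below are arbitrary illustrations; each institution would calibrate its own values per model and metric, and document the calibration.

```python
def monitoring_action(metric_drift: float,
                      escalate_at: float = 0.05,
                      retrain_at: float = 0.10,
                      rollback_at: float = 0.20) -> str:
    """Map an observed drift metric to a documented governance action.
    Threshold values are illustrative placeholders, not a standard."""
    if metric_drift >= rollback_at:
        return "rollback"   # revert to the last independently validated version
    if metric_drift >= retrain_at:
        return "retrain"    # schedule retraining plus independent re-validation
    if metric_drift >= escalate_at:
        return "escalate"   # human review, with the decision documented
    return "ok"

print(monitoring_action(0.03), monitoring_action(0.12))  # → ok retrain
```

What matters for supervisory purposes is less the specific values than the fact that the mapping exists in writing before an incident, so that each action taken on a drifting model can be shown to follow a pre-approved rule rather than ad-hoc judgement.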
D. For Cross-Border and EU-Facing Operations
- Treat the EU AI Act high-risk regime as the operational upper bound. For Swiss firms with EU clients, counterparties or branches, this is no longer aspirational – it is the de facto design standard.
- Map the institution’s AI footprint against the EU AI Act risk classification, including the Annex III high-risk categories (creditworthiness, employment, essential services).
- Track the CoE AI Convention implementation timeline. Consultation by the end of 2026; expect targeted legislative amendments primarily in the revFADP, FinSA and GwG.
- Coordinate Swiss and EU regulatory dialogue. FINMA early engagement and EU-side notified-body / supervisory-authority interaction should be deliberately aligned, not run on parallel tracks with inconsistent positions.
E. Engagement with FINMA
- Engage FINMA proactively, not reactively. The April 2025 survey communication explicitly invites institutions to contact FINMA in good time before deploying AI in critical processes or for the calculation of regulatory parameters; early dialogue de-risks supervisory surprises.
- Use the supervisory dialogue to clarify ambiguity. Where the principles-based architecture leaves room for interpretation, a documented FINMA conversation is the most valuable evidence of good-faith compliance.
- Anticipate the next supervisory cycle. Following the 2025 survey, FINMA’s second-generation expectations on AI are likely to crystallise into a follow-up Guidance or Circular; institutions that have built proper inventories, validation pipelines and documentation will absorb those changes incrementally.
VII. Outlook
Switzerland’s deliberate decision against a horizontal AI statute should not be mistaken for regulatory laxity. By relying on existing financial market, data protection and AML legislation, on the supervisory authority of FINMA, and on the prospective implementation of the CoE AI Convention, the Federal Council has chosen a regime in which substantive expectations are stable but the practical bar moves with technology. For sophisticated FinTech operators, this is an advantage: it leaves room to design governance frameworks proportionate to actual risk rather than prescribed templates. For less mature operators, it creates an asymmetry: the absence of a checklist is not the absence of a duty, and the burden of proving adequate governance rests with the institution.
By the end of 2026, the consultation draft implementing the CoE AI Convention is expected, the EU AI Act’s high-risk regime will be in full operational application, and FINMA’s second-generation supervisory expectations on AI will likely have crystallised into a follow-up Guidance or Circular. Swiss FinTech firms that have used the intervening period to mature their AI governance – inventory, documentation, validation, explainability, human oversight, contractual architecture – will absorb those changes incrementally. Those that have not, will not.
VIII. Conclusion
Switzerland’s AI-in-FinTech regime is a textbook expression of the Swiss regulatory style: principles before rules, supervision before legislation, technology-neutrality before technology-specific prescription, international alignment before national exceptionalism. The combination of FINMA Guidance 08/2024, the revFADP, the GwG, the FinSA / FinIA, the CoE AI Convention and the gravitational pull of the EU AI Act provides a coherent – if non-codified – framework. The challenge for boards and general counsel is no longer to ask whether AI is permitted, but to demonstrate that the AI in production has been designed, deployed, documented and monitored to a standard that the regulator, the client and ultimately a court would recognise as adequate. That standard is, by design, a moving target. The institutions that internalise this will define what “good” looks like in Swiss FinTech AI for the next decade.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.