A New Normal
At this point, we've all become familiar with generative AI models. Most users today engage with AI models through a conversational interface: they type prompts, questions, or instructions, and the model replies. People treat these systems like a "smart assistant," not by handing off control, but by querying them, iterating on and refining their output, and guiding them with feedback. In this interaction mode, the human remains in the loop. The AI system is a collaborative tool, not an independent actor. "Agentic" AI seeks to expand on those capabilities by enabling systems to take action on behalf of users, rather than simply providing information and analysis.
Imagine planning for a weekend trip, including making reservations for flights and hotels. Instead of spending time comparing airlines, flight times, loyalty points, and prices, an AI agent handles the entire process for you. The agent knows your travel preferences, loyalty memberships, budget, and preferred payment methods. You tell the AI agent, "Book me a trip to Miami for next weekend, under $700, at that hotel I stayed at last time. And while you're at it, order that fancy carry-on bag I was looking at the other day so I have it for the trip." The AI agent books your flights, reserves the hotel, and orders the bag to arrive at your house right away. What used to require hours of research and dozens of clicks is executed with a single command.
Enabling AI agents to handle not just research and recommendations but the execution of purchases themselves fundamentally alters commercial relationships and introduces new practical and legal questions for card issuers, merchants, acquirers, and consumers.
What if an AI Agent Makes a Mistake?
Ideally, your AI agent books the perfect trip. But what if it gets it wrong? You don't like the flight times, there are too many connections, or it books the wrong hotel. Heaven forbid, it orders you the wrong fancy bag. In situations like these, dissatisfied consumers will seek to recover their losses through returns or chargebacks. This, in turn, creates reputational, regulatory, financial, and operational risks for card issuers, acquirers, and merchants.
Navigating Liability and Authorization Challenges in AI Agent-Initiated Commerce
If a consumer provides adequate notice to their financial institution, current laws such as the Truth in Lending Act and Regulation Z (for credit cards) and the Electronic Fund Transfer Act and Regulation E (for debit cards and other electronic transfers) limit the consumer's liability for unauthorized transactions. They require financial institutions to reimburse consumers for unauthorized transactions beyond modest statutory caps: typically $50 for credit cards and up to $500 for debit cards (depending on how quickly the consumer reports), though many issuers voluntarily provide broader protection. Separately, the card network rules provide "zero liability" protections for cardholders for unauthorized transactions, subject to certain conditions. The result is that issuers generally bear the upfront cost of reimbursing the cardholder; however, under network rules, they may shift that loss to the merchant's acquirer (and ultimately the merchant) through the chargeback process unless the merchant can produce sufficient evidence that the transaction was properly authorized and authenticated.
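As a rough illustration of how those caps interact with reporting timelines, consider the sketch below. It compresses Regulation Z and Regulation E into a few lines and ignores network zero-liability overlays and voluntary issuer protections; the thresholds and function shape are simplifications for illustration, not a statement of the law.

```typescript
// Simplified sketch of the statutory caps described above. Real-world
// liability turns on the timing and adequacy of notice, network
// zero-liability rules, and issuer policies; this is illustrative only.

type CardType = "credit" | "debit";

function consumerLiabilityCap(card: CardType, daysToReport: number): number {
  if (card === "credit") {
    return 50; // Reg Z caps liability for unauthorized credit card use at $50
  }
  // Reg E tiers for debit: $50 if reported within 2 business days,
  // up to $500 if reported within 60 days of the statement, and
  // potentially unlimited for transactions occurring after that window.
  if (daysToReport <= 2) return 50;
  if (daysToReport <= 60) return 500;
  return Number.POSITIVE_INFINITY;
}

// Example: a debit cardholder who waits three weeks to report faces
// up to a $500 cap before voluntary issuer protections kick in.
console.log(consumerLiabilityCap("debit", 21)); // 500
```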
One of the questions now emerging is: how can a merchant prove proper authorization and authentication when an AI agent initiated the payment transaction, rather than the cardholder personally? Without a reliable, accepted proof-of-authorization model for AI agent-initiated commerce, merchants are likely to apply existing fraud controls, such as strong customer authentication (SCA), 3-D Secure (3DS), fraud scoring, and bot detection tools, that block these transactions, limiting adoption and use of agentic AI technology.
Auditing Proof of Authorization
While courts have yet to apply common-law agency principles to autonomous AI systems, that framework still surfaces key questions. Under agency principles, an agent can bind a principal if the agent acts with actual authority, whether express ("buy two Bon Jovi tickets for next Saturday's 8:00 pm show, up to $400 each") or implied, which empowers the agent to exercise some discretion to complete the express task ("order my regular grocery order every Sunday"). When a dispute arises, a threshold question is whether the agent acted within the scope of the authority granted by the consumer.
Google's recently announced Agent Payments Protocol (AP2) provides a technical framework to record and prove what the user authorized. AP2 standardizes authorization parameters (for example, price limit, merchant preference, refundability) and creates a secure and traceable record of cryptographically signed "Mandates." An Intent Mandate records the user's specific instructions to the AI agent, and a Cart Mandate confirms the specific purchase either through the user's express approval or the agent's execution within that pre-defined scope. AP2 allows stakeholders to audit the authorizations and the resulting decisions to trace and identify errors.
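To make the Mandate concept concrete, here is a minimal sketch of how such records might be shaped. The field names below are hypothetical stand-ins, not the published AP2 schema; the point is that the signed intent and the executed cart are separate, linkable artifacts a dispute reviewer can compare.

```typescript
// Hypothetical shapes for AP2-style Mandates; field names are
// illustrative stand-ins, not the actual AP2 specification.

interface IntentMandate {
  mandateId: string;
  userId: string;
  instruction: string;      // e.g., "book Miami trip next weekend, under $700"
  priceLimitCents: number;  // authorization parameter: spending cap
  merchantPreference?: string;
  refundableOnly?: boolean;
  userSignature: string;    // user's cryptographic signature over the intent
}

interface CartMandate {
  intentMandateId: string;  // links this purchase back to the signed intent
  lineItems: { description: string; priceCents: number }[];
  totalCents: number;
  approvedBy: "user" | "agent-within-scope";
  signature: string;        // signed, traceable record of the purchase
}

// In a dispute, a reviewer can verify that the executed cart stayed within
// the signed intent (e.g., cart.totalCents <= intent.priceLimitCents).
```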
Identifying a Trusted Agent
As noted above, faced with the prospect of a material increase in agent-initiated transactions, and the associated risk of more chargebacks that they cannot contest without proof of authorization or authentication, merchants are likely to use existing tools to block (or at least limit) AI agents' ability to interact with their sites.
Even assuming the adoption of authorization protocols such as Google's AP2, the next question would be: How can a merchant know if a particular AI agent is a good actor that supports appropriate authentication protocols or a bad actor that doesn't?
To meet this challenge, payment networks and other technology providers are developing solutions that help merchants and processors verify trusted agents.
Visa's Trusted Agent Framework, Mastercard's Agent Tokens, and Cloudflare's Web Bot Auth each represent early efforts to build an interoperable trust layer for autonomous digital agents. These initiatives share a common goal: to enable merchants and acquirers to differentiate between authorized, identity-bound agents acting on behalf of real consumers and unverified or malicious bots operating outside of established authentication protocols. Each approaches the problem slightly differently:
- Visa's framework extends existing network identity and tokenization standards to include "agent credentials" that can be cryptographically verified by acquirers and issuers during transaction authorization.
- Mastercard's Agent Tokens leverages the company's tokenization infrastructure to associate a specific AI agent with a consumer account and its corresponding authorization mandates.
- Cloudflare's Web Bot Auth functions at the network and application layer—using cryptographic attestations from verified agent platforms to signal that traffic originates from a known, trusted automation source.
Collectively, these protocols point toward a converging model in which merchants can recognize, authenticate, and accept payments from trusted AI agents while continuing to block unverified automation.
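What "recognize and authenticate" might look like at the merchant edge is sketched below, loosely modeled on the signed-attestation approach: the merchant checks that a request carries a valid signature from a known agent platform before letting it transact. The registry, agent identifiers, and key scheme are assumptions for illustration, not any provider's published specification.

```typescript
// Conceptual sketch of merchant-side agent verification. The registry,
// agent identifiers, and Ed25519 key scheme here are assumptions.

import { createPublicKey, verify } from "node:crypto";

// Hypothetical registry of public keys for known, trusted agent platforms.
const trustedAgentKeys = new Map<string, string>([
  ["agents.example-platform.com", "-----BEGIN PUBLIC KEY-----..."], // placeholder key
]);

function isTrustedAgentRequest(
  agentId: string,
  signedPayload: Buffer, // e.g., canonicalized request headers
  signature: Buffer,
): boolean {
  const pem = trustedAgentKeys.get(agentId);
  if (!pem) return false; // unknown agent: treat as unverified automation
  const key = createPublicKey(pem);
  // For Ed25519 keys, Node's verify() takes null as the algorithm.
  return verify(null, signedPayload, key, signature);
}
```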
AI Agents Add a New Layer of Exposure to Sensitive Information
Trust in agentic systems also requires comfort that these systems securely handle the consumer data that makes delegation to agents possible. AI agents require deep access to consumer data such as financial profiles, transaction history, preferences, and payment information to make accurate purchasing decisions and act autonomously. In addition to compliance risks associated with international, federal, and state data privacy laws, this concentration of sensitive information in the agentic system heightens exposure to risks such as data leaks, misuse, or exploitation by bad actors. Though larger questions remain about how data privacy laws will apply to this technology, industry participants have also begun developing protocols to protect consumer and payment information, including OpenAI and Stripe's Agentic Commerce Protocol and VGS' Agentic Toolkit. These solutions store or tokenize payment credentials and pass on only the minimal approved information that an agent needs to complete a purchase, reducing the risk that sensitive data is exposed.
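A minimal sketch of that data-minimization pattern appears below: the vault keeps the real card credentials, and the agent only ever handles a scoped, short-lived token. The names and shapes are illustrative, not the Agentic Commerce Protocol or VGS APIs.

```typescript
// Illustrative tokenization pattern: the agent never sees the PAN,
// only an opaque token scoped by amount and lifetime.

import { randomUUID } from "node:crypto";

interface VaultedCard {
  pan: string;        // primary account number; never leaves the vault
  expiry: string;
  cardholder: string;
}

interface AgentPaymentToken {
  token: string;          // opaque reference the vault maps internally
  maxAmountCents: number; // scope: spending cap for this delegation
  expiresAt: Date;        // scope: short lifetime limits exposure
}

function issueAgentToken(card: VaultedCard, maxAmountCents: number): AgentPaymentToken {
  void card; // the vault stores the mapping; the PAN is never returned
  return {
    token: randomUUID(),
    maxAmountCents,
    expiresAt: new Date(Date.now() + 15 * 60 * 1000), // 15-minute lifetime
  };
}
```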
Other Operational and Compliance Risks
Beyond core authorization and data concerns, agentic commerce will stress-test many of the day-to-day compliance and disclosure requirements that underpin today's payments ecosystem, including the ability of financial institutions, merchants, and other stakeholders to:
- document consumer consent to applicable terms and conditions,
- identify individual transactions on receipts and statements,
- manage recurring payments,
- address chargebacks or disputes over undelivered or unaccepted goods or services, and
- disclose add-on fees.
This is a non-exhaustive list of issues that parties involved in agentic AI–enabled transactions will need to evaluate as adoption increases.
Conclusion
As agentic commerce moves from concept to practice, stakeholders should prepare for both its opportunities and uncertainties. The ability of AI agents to transact autonomously will test longstanding legal, technical, and commercial frameworks in ways we haven't previously encountered. Addressing these issues will require coordinated innovation from all corners of the payments industry, as well as from those who regulate it.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.