Imagine a user shopping on ChatGPT. She types a one-sentence prompt: “Find me five pairs of vintage sneakers, around $100 each, that I can give as gifts to my friends.” ChatGPT responds with a few clarifying questions, queries a merchant catalog via its Agentic Commerce Protocol, and the user gets to check out without ever leaving the chat interface.

Fast forward three weeks. The cardholder whose card was used in that transaction calls their bank: “Hey, I never bought these sneakers.” What happened? Perhaps a good customer’s OpenAI account was compromised. Perhaps a fraudster opened a new account and paid with a stolen card whose details they bought on the dark web. Or perhaps the cardholder did place the order, regrets it, and is now angling for a refund.

Regardless of the attack vector, the merchant is put in an unfair situation. The payment token they received was valid, so they naturally approved the order. But in many cases they carry the financial liability, even though the customer never visited their store. Merchants, not LLMs, have to reimburse banks for chargebacks. Granted, some customers will use alternative payment methods like Apple Pay or Google Pay, where the digital wallet provider takes on chargeback liability. Even then, the merchant still absorbs the indirect costs of any fraud, including customer support costs, refund abuse, restocking costs, and declining authorization rates.

According to a recent Riskified survey, shoppers are embracing AI assistants like ChatGPT for product ideas (45%), to summarize reviews (37%), and to compare prices (32%). This trend creates pressure for merchants to opt into ChatGPT’s ACP, and if they don’t, they stand to lose volume to their competitors who are participating. Industry leaders warn that merchants face significant disadvantages, including reduced website traffic, loss of valuable customer behavioral insights, and diminished ability to influence purchasing decisions through strategic merchandising and design. Companies are essentially reduced to feeding their product catalogs to ChatGPT while having minimal control over whether the algorithm recommends their products to consumers.

Another hard truth about agentic commerce: after years of fraud teams painstakingly de-risking their businesses, this new shopping flow threatens to reintroduce the very risks they worked so hard to eliminate. The ACP passes only basic data about the transaction to the merchant, stripping away the behavioral, device, and session signals fraud teams rely on to make informed decisions, while leaving the merchant on the hook for the same financial liability.
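To make the visibility gap concrete, here is an illustrative sketch in Python. The field names are hypothetical and do not reflect the actual ACP schema; the point is simply to contrast the signals a fraud team sees on a direct website checkout with what a merchant might receive on an agent-placed order:

```python
# Illustrative only: hypothetical field names, not the real ACP schema.

# Signals a fraud team typically collects on a direct website checkout.
direct_checkout_order = {
    "payment_token": "tok_abc123",
    "email": "shopper@example.com",
    "shipping_address": "123 Main St, Springfield",
    "ip_address": "203.0.113.42",        # geolocation, proxy/VPN checks
    "device_fingerprint": "fp_9f8e7d",   # device reputation lookups
    "session_behavior": {                # on-site behavioral signals
        "pages_viewed": 14,
        "time_on_site_sec": 612,
    },
}

# What an agent-placed order might look like: payment and fulfillment
# details only, with the behavioral context stripped away.
agent_placed_order = {
    "payment_token": "tok_abc123",
    "email": "shopper@example.com",
    "shipping_address": "123 Main St, Springfield",
}

# The difference is exactly the data risk models depend on.
missing_signals = sorted(set(direct_checkout_order) - set(agent_placed_order))
print(missing_signals)
# → ['device_fingerprint', 'ip_address', 'session_behavior']
```

Every field in the second dictionary is still valid, which is why the order sails through; it is the absent fields that would have flagged a compromised account or a scripted buyer.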

And now, scale that scenario. Today’s organized fraud rings are well-resourced, automated, and adept at exploiting weak points in identity authentication, account takeover, and payment flows. They are able to operationalize this attack and multiply it exponentially. Reports of a wall of 1,250 smartphones running scams at a single facility in Cambodia give a sense of the scale. What was designed as a seamless LLM-powered customer experience could become a new mass-exploitation vector, enabling attackers to weaponize AI-driven checkouts across multiple merchants at once. The result: a surge in disputes, soaring chargeback rates, and a merchant community suddenly forced to absorb an asymmetric cost of doing business in an agent-enabled world.

Policy abuse is a particular concern

Unauthorized payment use is one problem. But perhaps the most serious threat posed by this new shopping flow is policy abuse in its various forms. Resellers can exploit the system by creating numerous ChatGPT accounts to purchase large quantities of merchandise for resale. While not technically fraud, this practice undermines a merchant’s control over its brand and inventory. As with card-not-present (CNP) fraud, merchants will struggle to identify these resellers because they have so little data about the buyer’s identity.

Such threat vectors exist not only at the point of purchase but also after a purchase. Merchants may face a surge in claims tickets because they lose the ability to identify and decline orders from abusive customers at checkout. Compounding the issue, many of these claims may come from legitimate customers experiencing buyer’s remorse, driven by the new, quicker, and potentially experimental shopping flow. This combination of factors creates a perfect storm of abuse, leaving merchants with fewer tools to protect their businesses.

What fraud teams can do today

This space is developing extremely fast. OpenAI’s Agentic Commerce Protocol represents the next evolution of the hybrid human-agent shopping flow, but is unlikely to be the final state. There are a few steps that fraud teams should consider to best function in an ACP world and to future-proof themselves against whatever comes next. 

  • Explain the risks to your organization. Most of the org chart will see partnering with OpenAI as an exciting, win-win opportunity. The fraud team is often the only one that understands the risk tradeoff it means for the store. It’s best to be pre-emptive: don’t wait for chargebacks to spike and for abusers to overwhelm your CX team’s queue. Your executives need to thoroughly grasp the potential rewards and risks involved.
  • Have your C-suite advocate for data sharing. As mentioned above, game theory compels every store to integrate with the ACP: as risky as it may be to join, it’s even riskier to sit on the sidelines. But OpenAI needs to understand that for these partnerships with ecommerce merchants to be sustainable, it will need to pass information about the end user’s IP address, behavior, device, and more, to help the merchant make an informed decision about whether to accept the order. Failure to do so risks amplifying fraud to the point where agentic commerce becomes a net loss for everyone involved.
  • Share the responsibility for fraud prevention. In a world where merchants see less data on each order, the smartest move is to partner with AI-driven fraud intelligence platforms that guarantee their decisions. These third-party providers can offset the loss of visibility by tapping into their vast network data, connecting a single data point, like an email or shipping address, to dozens or even hundreds of transactions across merchants. This shared intelligence restores context that individual merchants no longer have and helps distribute the burden of fraud prevention more equitably across the ecosystem.
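As a minimal sketch of the network-linking idea in the last bullet (the data and the linking logic here are entirely hypothetical, not any vendor’s actual model), cross-merchant intelligence works by indexing transactions on a shared identifier, so a single email address can surface a pattern no individual merchant could see:

```python
from collections import defaultdict

# Hypothetical cross-merchant transaction history; a real network would
# span millions of orders across thousands of merchants.
network_transactions = [
    {"merchant": "shoes-r-us", "email": "x@mail.test", "chargeback": True},
    {"merchant": "gadget-hub", "email": "x@mail.test", "chargeback": True},
    {"merchant": "book-nook",  "email": "x@mail.test", "chargeback": False},
    {"merchant": "book-nook",  "email": "y@mail.test", "chargeback": False},
]

# Index every transaction by a shared identifier (here, email address).
by_email = defaultdict(list)
for txn in network_transactions:
    by_email[txn["email"]].append(txn)

def risk_context(email: str) -> dict:
    """Summarize what the network knows about one identifier."""
    history = by_email.get(email, [])
    return {
        "merchants_seen": len({t["merchant"] for t in history}),
        "chargebacks": sum(t["chargeback"] for t in history),
    }

# One merchant sees a single clean-looking order; the network sees
# the same email behind chargebacks at multiple other stores.
print(risk_context("x@mail.test"))
# → {'merchants_seen': 3, 'chargebacks': 2}
```

The same indexing applies to any shared identifier (shipping address, device, phone number); the value comes from the breadth of the network, not from the lookup itself.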

The promise of agentic commerce is enormous: more seamless experiences, faster discovery, and higher conversion. But a promise without prudence is peril. Left unchecked, agentic commerce risks creating a commerce layer that amplifies bad actors and leaves those who actually run the businesses — small brands, enterprise merchants, and the customer service teams that clean up the mess — to pay the price.

Ready to embrace the future of ecommerce with confidence?

Through strategic partnerships, innovative technology, and enhanced infrastructure, Riskified is empowering merchants to adapt securely and efficiently to the future of agentic commerce. 

Our solutions combine a security framework built on Riskified’s expertise in fraud prevention and ecommerce enablement, delivering precise decisions and improved business performance for merchants.

Contact us to learn how Riskified’s solutions can help you innovate securely and thrive in the era of agentic commerce.