Your consumers are ready for conversational commerce. People are comfortable asking an AI chatbot to help them find products, compare options, and navigate a purchase. That’s a real behavioral shift, and it’s not reversing.

But the real debate right now isn’t whether AI belongs in retail. It’s where it sits.

The last year has made one thing clear: external AI platforms struggle to deliver a genuinely good shopping experience on their own. Why? Because they’re flying blind. They don’t have access to your rich product catalog, inventory data, or customer history. But native brand AI, fueled by that proprietary data, can do significantly better. And it’s not just about the experience — going native is also a business decision. It’s how you keep the customer relationship in your own hands.

There’s a tendency to treat this as binary: either you let customers shop through ChatGPT or Google’s agents, or you build something yourself. That’s the wrong way to look at it. Using external AI platforms as a referral and discovery layer, the way you use Google search today, makes complete sense. Getting that referral traffic is great. But handing over the relationship with your customer is the part worth thinking carefully about.

Here’s the thing: the LLM technology powering those AI chatbots is the same technology you can deploy natively inside your own ecosystem. It’s also the direction we’ve been moving at Riskified — expanding our platform specifically to support merchants building their own AI agents. You don’t have to choose between a great conversational experience and owning the customer relationship.

That relationship is the business. Every transaction flowing through an external agent is an interaction you didn’t capture: no first-party data, no path to upsell, no ability to drive repeat purchases. You’re building someone else’s loyalty loop, not yours.

Where native AI actually earns its keep

Retailers are starting to deploy agentic AI across the customer journey, from discovery to checkout to post-purchase support. Discovery is not easy to do well, but merchants generally have the data they need to solve it (catalog, inventory, customer browse history). 

Post-checkout is a totally different problem.

Returns, refunds, claims, and shipping issues all involve genuinely difficult decisions for your team. A customer contacting support about a missing order is likely in an emotional state. How you handle it will directly determine whether they come back. Get it right and you’ve earned real loyalty. Get it wrong by being too slow, too rigid, or dismissive, and you’ve probably lost them. At the same time, every decision to issue a refund or approve a return has a direct cost. The same interaction is simultaneously your biggest service opportunity and your biggest exposure to abuse.

We’ve already seen this work in practice. Rue Gilt Groupe recently integrated our identity intelligence directly into their customer service workflows, giving human agents a real-time risk score at the moment of every refund or high-risk request. The result: trusted customers get instant resolution, serial abusers get friction. As AI agents take over more of these interactions, the same intelligence needs to be there, in real time, at the same moment.

An AI agent operating autonomously in that environment, without the right context, is dangerous in both directions. Too generous and you’re a target for policy abuse at scale. Too cautious and you’re insulting customers who deserved better.
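To make that concrete, here is a minimal sketch of what risk-aware routing could look like inside an agent’s refund workflow. The function names, score scale, and thresholds are purely illustrative assumptions for this post, not Riskified’s actual API; the point is that a single real-time score lets the agent be generous and cautious at the same time, to the right people.

```python
# Hypothetical sketch: routing a refund request based on a real-time
# identity risk score. All names and thresholds below are illustrative
# assumptions, not an actual vendor API.
from dataclasses import dataclass


@dataclass
class RefundRequest:
    customer_id: str
    amount: float  # a real policy would also weigh order value, history, etc.


def route_refund(request: RefundRequest, risk_score: float) -> str:
    """Decide how an agent should handle a refund request.

    risk_score: 0.0 (trusted) to 1.0 (likely abuser), supplied by an
    identity-intelligence lookup at the moment of the interaction.
    """
    if risk_score < 0.2:
        return "instant_refund"      # trusted customer: frictionless resolution
    if risk_score < 0.7:
        return "standard_review"     # ambiguous: apply normal policy checks
    return "escalate_to_human"       # likely serial abuser: add friction


print(route_refund(RefundRequest("c-123", 59.99), risk_score=0.1))  # instant_refund
```

Without that score, the agent has only the two failure modes described above: one blanket policy that is too generous for abusers, or one that is too rigid for loyal customers.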

The information problem

How do you make the right call during these high-stakes moments? This is when having the right identity intelligence is critical. When your AI agent is talking to someone, it needs to know: Is this a loyal customer who deserves an immediate refund, or is this someone with a history of exploiting policies? That is a question no single merchant can answer on their own.

At Riskified, we’ve spent years building something no individual retailer can replicate: a cross-merchant view of how abuse actually happens. Because we sit at the center of a network reviewing hundreds of billions in GMV for over a dozen years, we’ve seen the same identities surface across hundreds of merchants. We understand the behavioral patterns around returns and refunds at a massive scale.

And we aren’t just looking at clean data. Our systems are designed to understand messy CRM and transaction records where no single reliable shared key exists. We use advanced machine learning and probabilistic methods to connect the dots between accounts — even when abusers are actively trying to hide those connections, fly under the radar, or look like new users. That capability goes beyond what any internal data team can build on its own.
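The core idea behind probabilistic record linkage can be sketched in a few lines. This is a deliberately simplified illustration, not our production approach: the field names, weights, and similarity measure (Python’s standard-library fuzzy matcher) are all assumptions chosen for readability.

```python
# Illustrative sketch of probabilistic record linkage: scoring whether two
# accounts likely belong to the same person when no shared key exists.
# Field names and weights are hypothetical, for illustration only.
from difflib import SequenceMatcher


def field_similarity(a: str, b: str) -> float:
    """Fuzzy string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_score(rec_a: dict, rec_b: dict, weights: dict) -> float:
    """Weighted average of per-field similarities between two records."""
    total = sum(weights.values())
    return sum(
        w * field_similarity(rec_a.get(field, ""), rec_b.get(field, ""))
        for field, w in weights.items()
    ) / total


weights = {"name": 0.3, "shipping_address": 0.5, "device_id": 0.2}
a = {"name": "Jon Smith", "shipping_address": "12 Elm St Apt 4", "device_id": "d-9f2"}
b = {"name": "Jonathan Smith", "shipping_address": "12 Elm Street, Apt 4", "device_id": "d-9f2"}

# A high score suggests the same person behind two "different" accounts.
print(round(match_score(a, b, weights), 2))
```

In practice the hard part is exactly what a toy like this glosses over: doing this at network scale, across merchants, against adversaries who deliberately vary every field.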

The work we’re doing now feeds that intelligence directly to AI agents precisely when they need it. It’s a real-time signal during a support interaction that doesn’t just ask, “Is this fraud?” It answers, “Who am I actually talking to right now?” — which then dictates, “How should I treat this person?”

The truth is, most agentic commerce deployments have the AI part figured out. The agent can hold a conversation, it can take action. What it can’t do on its own is know who it’s actually talking to. That’s the missing layer — and it’s the one that matters most.

Contact us to learn how Riskified’s solutions can empower you to thrive in the era of agentic commerce.