
Agentic Commerce is Coming, but Regulation Meant for Humans Will Slow It Down

by Eli Clemens

Agentic commerce is nearly here, and it is poised to transform shopping and procurement. Soon, artificial intelligence (AI) agents will be able to track thousands of listings simultaneously, monitor prices, and make purchases according to a buyer’s stated preferences. Consumers will save time and money, businesses will optimize supply chains, and the economy will benefit from better-matched supply and demand—but only if regulation catches up with innovation.

The private sector has been collaborating to rapidly build the digital infrastructure that will support interoperable and secure agentic commerce. Industry leaders project that agentic commerce technology will be ready this year: Visa previously stated that “2025 will be the final year consumers shop and checkout alone.” Yet companies still operate in a state of regulatory uncertainty, and existing laws could slow agentic commerce.

The National Institute of Standards and Technology (NIST) plans to host a public-private conversation in April 2026 on standards for AI agents and barriers to their adoption. Updating NIST standards and other regulations to remove those barriers would align with the first pillar of the Trump administration’s AI Action Plan, which calls for removing outdated regulations and streamlining infrastructure to “win the AI race.” The Office of Science and Technology Policy has correctly observed that many current rules “rest on assumptions about human-operated systems that are not appropriate for AI-enabled or AI-augmented systems.” However, for agentic commerce, neither the executive branch nor Congress has proposed changes to outdated rules, a gap that risks slowing consumer adoption and U.S. economic innovation.

On the consumer side, Regulation E, the federal rule that gives consumers the right to dispute erroneous electronic fund transfers, provides no clear framework for handling disputes in agentic commerce. The regulation states that consumers can authorize a payment using a “card, code, or other means,” which presumably would include a consumer sharing login credentials with an AI agent. But what would happen if an AI agent violated the consumer’s instructions, such as by autonomously ordering the wrong item or the wrong quantity? Or what if an AI agent were to purchase a product that a retailer mistakenly listed at an artificially high price, an error a human would recognize as obvious but an AI agent might not? As written, Regulation E would leave consumers who contest agentic purchases without the dispute rights they rely on for every other electronic transaction, regardless of where the mistake occurred.

The Consumer Financial Protection Bureau (CFPB), which enforces Regulation E, should update the rule to address agentic commerce. In August 2025, CFPB sought comment on an advance notice of proposed rulemaking on personal financial data rights, including the question of who can serve as a “representative” acting on a consumer’s behalf. Agentic commerce’s availability and adoption will depend on CFPB answering this question, along with questions about Regulation E’s transaction authorization and dispute framework. Most importantly, CFPB should clarify that authorizing an AI agent does not waive a consumer’s error resolution rights. Without that clarity, businesses will build uncertainty into their pricing and product design. Unlike fintech innovations built on existing rails, agentic commerce is a genuinely new transaction model, and the longer it scales without clear rules, the more likely costly workarounds become the industry standard on which it is built.

On the enterprise side of agentic commerce, regulated industries such as healthcare, food and beverage, and financial services operate under procurement requirements that assume a human authorized each transaction. Separately, public companies deploying AI procurement agents face uncertainty because the Securities and Exchange Commission (SEC) has offered little guidance on how existing compliance obligations apply. For example, Section 302 of the Sarbanes-Oxley Act (SOX) requires company executives to personally certify that they have designed and evaluated the effectiveness of their internal controls, but it is unclear whether an AI agent’s operating parameters would satisfy that requirement.

AI agents will produce logs that look very different from the approval records human-authorized transactions leave behind, and SOX has no framework for evaluating model parameters and automated outputs as evidence of sufficient internal control. Congress enacted SOX in the wake of Enron’s collapse to ensure financial misconduct could not hide behind inadequate documentation. Without updated audit trail standards for AI-generated logs, agentic procurement risks the same vulnerability at far greater speed and scale. Regulators governing these industries should offer clear guidance on agentic commerce before it arrives.

Agentic commerce is not without risks, but most—fraud, unauthorized transactions, and consumer protection concerns—are not new to the world of digital payments. As Jodie Kelley, CEO of the Electronic Transactions Association, said in January 2026 testimony before the House Financial Services Committee, “Many existing principles—authorization, consent, liability, auditability—apply in the agentic context.” In other words, existing laws governing these issues should largely apply, and courts will set precedents for how those laws interact with agentic commerce.

Policymakers should not let unresolved policy questions slow the momentum of agentic commerce. Regulators should start by updating rules written for human-initiated transactions to reflect the realities of the agentic age.

Image credit: Tim Reckmann/Flickr
