
The Trust Problem: How Do You Let an AI Spend Your Money?

Autonomous payments require a new trust architecture. India's existing infrastructure provides the building blocks.

Here's the question nobody wants to answer: if your AI agent can spend money, what stops it from spending all your money?

This isn't a theoretical concern. It's the core design problem of agentic payments. And the answer isn't "better AI" — it's better infrastructure.

The Spectrum of Autonomy

Not every transaction needs full autonomy. The mistake most people make is treating agent payments as binary — either the agent can pay or it can't.

In practice, there are five levels:

Level 0: Human Does Everything. Agent finds the product, human makes the payment. This is where we are today.

Level 1: Agent Proposes, Human Approves. Agent prepares the payment, sends a notification, human taps "approve." One step removed.

Level 2: Pre-Approved Ranges. Agent can spend up to ₹5,000 on approved categories without asking. Anything above triggers approval. This is where UPI AutoPay mandates become powerful.

Level 3: Smart Mandates. Agent operates within dynamic rules — spend up to ₹2 lakh per month on whitelisted vendors, but flag anything from a new vendor. Rules adjust based on patterns.

Level 4: Full Autonomy. Agent manages a budget, makes all spending decisions, reports periodically. Only for mature, trusted agents operating within proven parameters.

Most agentic commerce will live at Levels 2 and 3 for the foreseeable future. And that's fine — it's still a massive improvement over Level 0.
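As a concrete sketch, the five levels can be encoded as a spending policy that returns one of two outcomes for every proposed payment. The names and thresholds below (AutonomyLevel, SpendPolicy, PaymentRequest, the ₹5,000 cap) are illustrative, not part of any UPI or NPCI construct.

```python
# Illustrative encoding of the five autonomy levels as a spending policy.
# All names and thresholds here are hypothetical, not a UPI/NPCI construct.
from dataclasses import dataclass
from enum import IntEnum


class AutonomyLevel(IntEnum):
    HUMAN_ONLY = 0        # agent finds the product, human pays
    PROPOSE_APPROVE = 1   # agent prepares, human taps "approve"
    PRE_APPROVED = 2      # auto-pay within a cap and category list
    SMART_MANDATE = 3     # dynamic rules, flag new vendors
    FULL_AUTONOMY = 4     # agent manages the budget, reports periodically


@dataclass
class PaymentRequest:
    amount_inr: int
    category: str
    vendor_is_known: bool


@dataclass
class SpendPolicy:
    level: AutonomyLevel
    per_txn_cap_inr: int = 5_000
    approved_categories: tuple = ("utilities", "subscriptions")

    def decide(self, req: PaymentRequest) -> str:
        """Return 'auto' or 'ask_human' for a proposed payment."""
        if self.level <= AutonomyLevel.PROPOSE_APPROVE:
            return "ask_human"
        if self.level == AutonomyLevel.PRE_APPROVED:
            in_cap = req.amount_inr <= self.per_txn_cap_inr
            in_scope = req.category in self.approved_categories
            return "auto" if in_cap and in_scope else "ask_human"
        if self.level == AutonomyLevel.SMART_MANDATE:
            return "auto" if req.vendor_is_known else "ask_human"
        return "auto"  # Level 4: act first, report in the periodic summary
```

A policy object like this would sit between the agent's planner and whatever actually executes the payment.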

Why UPI Mandates Are the Perfect Primitive

UPI AutoPay mandates were designed for recurring payments — EMIs, subscriptions, utility bills. But their architecture maps remarkably well to agent spending controls:

  • Amount caps — maximum per transaction
  • Frequency limits — daily, weekly, monthly
  • Merchant restrictions — specific VPAs only
  • Revocability — instant cancellation by the user
  • Transparency — every debit is notified in real-time

This is not a hack. It's a legitimate use of existing infrastructure. The mandate system gives users granular control over what their agents can spend, where, and how much.
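A minimal sketch of how an agent could hold those mandate constraints internally, assuming illustrative field names rather than NPCI's actual mandate schema:

```python
# Agent-side guardrail mirroring the mandate controls listed above.
# Field names are illustrative, not NPCI's mandate schema.
from dataclasses import dataclass


@dataclass
class AgentMandate:
    max_per_txn_inr: int          # amount cap
    max_per_month_inr: int        # monthly volume limit
    allowed_vpas: set[str]        # merchant restriction
    revoked: bool = False         # user can cancel instantly
    spent_this_month_inr: int = 0

    def permits(self, amount_inr: int, vendor_vpa: str) -> bool:
        """Check a proposed debit against every constraint at once."""
        return (
            not self.revoked
            and vendor_vpa in self.allowed_vpas
            and amount_inr <= self.max_per_txn_inr
            and self.spent_this_month_inr + amount_inr <= self.max_per_month_inr
        )

    def record(self, amount_inr: int) -> None:
        """Track spend; the real-time debit notification covers transparency."""
        self.spent_this_month_inr += amount_inr
```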

The Verification Stack

Trust isn't just about spending limits. It's about knowing who you're paying.

When your agent needs to pay a vendor it hasn't worked with before, it needs to answer three questions:

  1. Is this entity real? — GSTIN verification against the GST portal (instant, free)
  2. Is this entity solvent? — MCA filing check for company financials (minutes)
  3. Is this bank account legitimate? — Penny drop verification (10 seconds, ₹1)

Total time: under 2 minutes. Total cost: ₹1.

India's verification stack makes real-time trust establishment possible. Your agent can discover a new vendor, verify them, and pay them — all within a single workflow. No paperwork, no waiting, no manual review.
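Here is a hedged sketch of that workflow. The check_* functions are stubs standing in for whichever GSTIN, MCA, and penny-drop verification providers you integrate; none of them refer to a real API.

```python
# Stubbed three-step verification flow. Each check_* function stands in for a
# real GSTIN, MCA, or penny-drop provider; the bodies here are placeholders.
from dataclasses import dataclass


def check_gstin(gstin: str) -> bool:
    """Placeholder: a real check queries the GST portal (instant, free)."""
    return len(gstin) == 15  # GSTINs are 15 characters


def check_mca_filings(cin: str) -> bool:
    """Placeholder: a real check pulls the company's MCA filings (minutes)."""
    return bool(cin)


def check_penny_drop(account: str, ifsc: str) -> bool:
    """Placeholder: a real check credits ₹1 and matches the account name (~10s)."""
    return bool(account and ifsc)


@dataclass
class VendorVerification:
    gstin_valid: bool
    mca_filings_ok: bool
    penny_drop_ok: bool

    @property
    def trusted(self) -> bool:
        return self.gstin_valid and self.mca_filings_ok and self.penny_drop_ok


def verify_vendor(gstin: str, cin: str, account: str, ifsc: str) -> VendorVerification:
    """Run all three checks before the agent's first payment to a new vendor."""
    return VendorVerification(
        gstin_valid=check_gstin(gstin),
        mca_filings_ok=check_mca_filings(cin),
        penny_drop_ok=check_penny_drop(account, ifsc),
    )
```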

The Escrow Pattern

For agent-to-agent transactions, the trust problem is bilateral: neither agent fully trusts the other. The solution is programmatic escrow:

  1. Buyer agent deposits payment into escrow
  2. Seller agent delivers the work
  3. Buyer agent verifies delivery
  4. Escrow releases payment

This pattern exists in human commerce — freelancing platforms, real estate closings. The difference is speed. With UPI settlement times, the entire escrow cycle can complete in minutes.

The key insight: escrow doesn't require a new financial product. It requires a well-designed smart contract layer on top of existing UPI rails.
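One way to picture that layer is as a small state machine tracking the four steps. This is an illustration of the pattern only; the states, methods, and class are hypothetical, not an existing UPI product.

```python
# A small state machine for the four-step escrow flow described above.
from enum import Enum, auto


class EscrowState(Enum):
    CREATED = auto()
    FUNDED = auto()      # step 1: buyer agent deposits payment
    DELIVERED = auto()   # step 2: seller agent delivers the work
    RELEASED = auto()    # step 4: escrow releases payment
    REFUNDED = auto()    # fallback when verification fails


class Escrow:
    def __init__(self, amount_inr: int):
        self.amount_inr = amount_inr
        self.state = EscrowState.CREATED

    def fund(self) -> None:
        assert self.state is EscrowState.CREATED
        self.state = EscrowState.FUNDED

    def mark_delivered(self) -> None:
        assert self.state is EscrowState.FUNDED
        self.state = EscrowState.DELIVERED

    def settle(self, delivery_verified: bool) -> None:
        # step 3: buyer agent verifies delivery, then funds move one way or the other
        assert self.state is EscrowState.DELIVERED
        self.state = EscrowState.RELEASED if delivery_verified else EscrowState.REFUNDED
```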

What Could Go Wrong

Honest risk assessment:

Prompt injection attacks. A malicious vendor could craft responses that trick an AI agent into approving higher payments. Mitigation: payment decisions should never be made by the same model processing vendor communications.
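A sketch of that separation, assuming a hypothetical Invoice structure: the model that reads vendor messages only extracts structured fields, and the payment decision runs on those fields plus the mandate cap, never on the raw text.

```python
# The untrusted model only extracts structured fields from vendor text; the
# payment decision is plain rule evaluation that never sees that text.
# Invoice, extract_invoice, and the cap parameter are hypothetical names.
from dataclasses import dataclass


@dataclass
class Invoice:
    vendor_vpa: str
    amount_inr: int


def extract_invoice(vendor_message: str) -> Invoice:
    """Untrusted side: an LLM or parser turns free text into two fields.
    Whatever the vendor wrote, only these fields cross the boundary."""
    raise NotImplementedError("plug in your extraction model here")


def decide_payment(invoice: Invoice, mandate_cap_inr: int) -> bool:
    """Trusted side: no model, no vendor prose, just the mandate rule."""
    return invoice.amount_inr <= mandate_cap_inr
```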

Mandate abuse. An agent with a ₹2 lakh monthly mandate could theoretically drain it on day one. Mitigation: velocity checks — daily sub-limits, anomaly detection, cooling periods for new vendor relationships.
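A sketch of what those velocity checks might look like in code; the daily cap, the tighter cap during the cooling period, and the VelocityGuard name are all illustrative choices.

```python
# Illustrative velocity checks layered on top of the monthly mandate: a daily
# sub-limit plus a tighter cap during a cooling period for new vendors.
from datetime import date, timedelta


class VelocityGuard:
    def __init__(self, daily_cap_inr: int = 20_000, cooling_days: int = 7):
        self.daily_cap_inr = daily_cap_inr
        self.cooling_days = cooling_days
        self.spent_today_inr = 0
        self.vendor_first_seen: dict[str, date] = {}

    def allow(self, amount_inr: int, vendor_vpa: str, today: date) -> bool:
        first_seen = self.vendor_first_seen.setdefault(vendor_vpa, today)
        in_cooling = today - first_seen < timedelta(days=self.cooling_days)
        # New vendor relationships get a quarter of the usual daily cap.
        cap = self.daily_cap_inr // 4 if in_cooling else self.daily_cap_inr
        return self.spent_today_inr + amount_inr <= cap

    def record(self, amount_inr: int) -> None:
        self.spent_today_inr += amount_inr
```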

Identity spoofing. A fraudulent entity could present valid-looking GSTIN credentials. Mitigation: cross-reference multiple data sources — GSTIN + bank verification + MCA filings.

Model errors. The agent genuinely makes a bad spending decision — not malicious, just wrong. Mitigation: human-reviewable audit logs, automatic flagging of decisions that deviate from historical patterns.

None of these risks are unsolvable. But they require infrastructure-level solutions, not application-level patches.

The Audit Trail

Every agentic transaction needs to be explainable. Not just for compliance, but for user trust.

The minimum viable audit trail:

  • Why the agent decided to make this payment
  • What verification steps were taken
  • How the amount was determined
  • When the mandate authorization was granted
  • What the fallback was if the transaction failed

This isn't a nice-to-have. It's the difference between users trusting the system and users turning it off.
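A minimal audit record covering those five fields might look like the sketch below. The field names are illustrative; the point is that every agentic debit carries a human-readable explanation.

```python
# One audit record per agentic debit, carrying the five fields listed above.
# Field names are illustrative; the structure is what matters.
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class AuditRecord:
    decision_reason: str            # why the agent decided to pay
    verification_steps: list[str]   # which checks ran (GSTIN, penny drop, ...)
    amount_rationale: str           # how the amount was determined
    mandate_granted_at: datetime    # when the authorization was given
    failure_fallback: str           # what happens if the transaction fails
```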

Building Trust Gradually

The path to autonomous payments isn't a switch you flip. It's a gradient you traverse:

Start with Level 1 — agents propose, humans approve. Build confidence. Move to Level 2 — pre-approved small amounts. Build more confidence. Gradually expand limits as the system proves reliable.

This is how every payment system in India has worked. UPI started with ₹1 lakh caps, now it's ₹5 lakh. BHIM initially limited transaction frequency, then relaxed it. The trust ceiling rises with demonstrated reliability.

Agentic payments will follow the same curve. The question is who builds the infrastructure that makes the curve possible.