[Two Cents #77] “Flights of Thought” on Consumer + AI — Part 3: What the heck is an agent?
Introduction
It’s becoming clear that the market’s “readiness” for Consumer AI has crossed a tipping point.
What we need now is to get far more concrete about how AI-driven market change will unfold—the direction, the mechanisms, and the implications for industry structure, competitive dynamics, and the economics between participants.
For founders, the job is to identify those opportunities a little earlier and move first. For investors, the job is to recognize those early moves quickly and support them aggressively.
This series—my “Flights of Thought”—is an attempt to share how I’m thinking through what will happen, what it will unlock, and what kinds of ideas are likely to matter.
Agents are emerging
My base case is that AI-delivered services will rapidly shift toward an agent-based architecture—especially multi-agent systems.
The problem is that “agent” is one of the most overloaded words in AI right now. It can mean wildly different things depending on who’s using it, and in what context.
So this post does two things:
Break down agents across a few useful lenses, so we can reason about where each type will be used.
Outline the kinds of shifts agents will drive as they become the substrate for AI apps, services, and systems.
We’re still at an early stage. There are more questions than answers. But the direction is becoming clearer.
How to segment AI agents
1) Segment by structure and role
In [Two Cents #71], I proposed five broad types based on how an agent operates and what role it plays.
Human Agents
Agents that replace a human performing a task in the same way a human would, without changing the underlying workflow.
The canonical examples are customer support calls, outbound sales calls, and other human-to-human interactions replicated by an AI.
Auto Agents
You give the agent a goal, and it autonomously executes multiple steps across tools and services to deliver an outcome.
Examples: booking travel, handling shopping, or completing an entire coding task end-to-end (in the spirit of Cursor/Devin).
Workflow Agents
An extension of auto agents. Instead of completing a single task for a user, these agents replace part—or all—of an organization’s workflow over time.
This likely becomes multi-agent / graph-agent by default, with a wide spectrum of human-in-the-loop designs.
One example of the “shape” of this: Stanford experiments where a team of agents operated like a research group over an extended period and produced meaningful scientific outputs (including protein design work relevant to COVID therapeutics).
Ambient Agents
Always-on, cloud-resident agents that proactively work in the background on behalf of a specific user.
They can access personal data sources (email, calendar, health, financial transactions), make decisions autonomously or with human confirmation, and trigger actions.
Virtual Human Agents
Agents whose primary role is to be a human-like counterpart: companions, “AI waifus,” and character-based agents.
Examples: CarynAI, Character.AI personas, and “Samantha” from Her.
This category can expand further—NPCs and characters in virtual worlds and games, or even autonomous “organizations” built to pursue an objective (a hedge fund agent collective, an IP manager, etc.).
(These labels are simply a working taxonomy. New terms will emerge and stabilize—as “ambient agent” already has.)
2) Segment by position inside an AI service ecosystem
In a true multi-agent environment, agents will exist as a population: created, destroyed, and persisted depending on who initiates requests and what roles need to be served.
This isn’t a clean MECE taxonomy—more a practical set of examples.
Personalization agents
My secretary/concierge agent: handles requests on my behalf, confirms decisions and outcomes with me, and orchestrates execution. (Do we strictly need this to be an “agent”? Possibly not—but the role will exist.)
My data layer / personalization router: manages access to my personal data sources and responds to personalization requests. This could look like an agent, an MCP-style server, or a dedicated data layer. In some architectures, the data layer itself could be implemented as a multi-agent system.
Open questions here include where this lives and how it operates:
On-device capture and inference vs. cloud-based ambient agents
Role split between device and cloud
Privacy, delegation, and trust boundaries
Service agents
Platform agents (commerce, travel, marketplaces, financial services): receive tasks, execute them, and return results. In many cases, this might be better represented as an MCP server rather than a full “agent,” depending on interaction patterns.
If the interaction is synchronous request-response, an MCP server may be sufficient.
If the interaction is asynchronous—e.g., bidding workflows—agent-based participation makes more sense.
These agents can take multiple shapes:
Execute tasks within a platform and return results (the platform interface becomes agent-facing rather than human-facing).
Request bids or results from other services/agents.
Decompose tasks into sub-tasks, orchestrate multiple agents, negotiate across them, and assemble a final output—effectively acting as an intermediary inside a multi-agent graph.
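The intermediary role in that last shape can be sketched as a thought experiment. Everything here is invented for illustration (the `Bid` shape, the fixed prices, the agent names); a real system would call LLM planners and remote agents rather than local stubs:

```python
# Hypothetical sketch of an intermediary "service agent" in a multi-agent
# graph: decompose a task, collect bids from sub-agents, assemble a plan.
# All names and values are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Bid:
    agent: str
    subtask: str
    price: float

def decompose(task: str) -> list[str]:
    # Stand-in for an LLM planning step that splits a goal into sub-tasks.
    return [f"{task}: search", f"{task}: negotiate", f"{task}: checkout"]

def request_bids(subtask: str, agents: list[str]) -> list[Bid]:
    # Stand-in for an async A2A bidding round; here each agent quotes
    # a fixed price instead of actually negotiating.
    return [Bid(agent=a, subtask=subtask, price=10.0 + i)
            for i, a in enumerate(agents)]

def orchestrate(task: str, agents: list[str]) -> dict[str, Bid]:
    # Pick the cheapest bid per sub-task and assemble the final plan.
    plan: dict[str, Bid] = {}
    for sub in decompose(task):
        bids = request_bids(sub, agents)
        plan[sub] = min(bids, key=lambda b: b.price)
    return plan
```

Even this toy version surfaces the real design questions: who is allowed to bid, how bids are compared, and where the human sits in the loop.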
Brand agents
Agents that represent a brand or service externally—serving other agents.
They might:
Respond to AI search queries with structured outputs (the GEO world), or
Submit bids/proposals in response to a user’s project request (leaf nodes inside a multi-agent market).
As the ecosystem becomes more agent-native, we should expect many more agent roles to emerge.
3) Segment by automation maturity
You can also classify agents by capability level, though I’m not sure this is always the most useful lens:
Glorified personalized help: personalized Q&A
Reactive execution: performs requested tasks (most “agents” today)
Proactive recommendations: suggests actions beyond explicit requests
Proactive action: recommends and completes tasks
Autonomous workflows: fully end-to-end execution, including collaboration/negotiation with other agents and multi-agent OKR-style work
Other possible segmentation
We could also segment by interaction behavior: initiating agents, serving agents (MCP servers), navigating agents, ambient agents, etc. But it’s not yet clear whether this adds real explanatory power.
What is clear: the division of labor between agents—and the UI/UX for how agents interact with users and services—will remain a major design problem for years.
How agents change the world
1) UI/UX and consumer interaction patterns
The first thing we need to internalize is that once autonomous multi-agent systems (including ambient agents) become real, the interaction model we’ve lived with for decades can change fundamentally.
Modern UX is essentially:
A GUI presents options → a human selects an action → the system responds → repeat as request-response pairs until the user gets what they want.
The core ingredients are:
presenting selectable options,
actions as request-response pairs,
sequences of human-driven steps.
In a multi-agent world—especially with ambient agents—each of these breaks.
The system can become headless, intent-driven, and agent-initiated, with a new balance between autonomy and user engagement. UI becomes less about “navigation” and more about confirmation, exception handling, and control boundaries.
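The contrast between the two interaction models can be sketched in a few lines. This is a conceptual illustration, not any real framework: the GUI loop requires the human at every step, while the agent-initiated loop surfaces only confirmations and exceptions:

```python
# Illustrative contrast between the two interaction models (invented
# function shapes, not a real UI framework).

def gui_loop(choices, pick):
    # Classic UX: the human selects each step; the system only responds.
    trace = []
    for options in choices:
        trace.append(pick(options))   # every step needs the user
    return trace

def agent_loop(proposed_actions, needs_confirmation, confirm):
    # Agent-initiated UX: the agent drives; the human is consulted
    # only at control boundaries (confirmation, exceptions).
    executed = []
    for action in proposed_actions:
        if needs_confirmation(action) and not confirm(action):
            continue                  # exception handling, not navigation
        executed.append(action)
    return executed
```

In the first loop, UI design is about presenting options; in the second, it is about deciding which actions cross the confirmation boundary at all.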
Today, most consumers still can’t directly experience a real multi-agent environment. But early experiments are showing up (e.g., products like Eigent—still developer-leaning at this stage).
Aside: it’s notable how many recent agent-focused projects come from China (Eigent, Skywork, Fellou, Manus). My guess is that China’s hyper-competitive consumer internet environment forces faster experimentation on new interaction patterns.
This new interaction model will take time to reach mass-market polish—but I wouldn’t bet on it taking more than ~12 months before we see commercial products that feel meaningfully different from today’s UI.
2) Business model and market structure shifts
When agents act as delegates—executing work on behalf of humans—the interaction and power dynamics between stakeholders (users, services, agents) will shift dramatically.
This is the engine behind the “fundamental shifts” discussed in [Two Cents #75]. Agents—especially autonomous multi-agent systems—pressure the existing division of labor across platforms, distribution, and monetization.
Commerce
Agent commerce will reshape:
competitive dynamics among existing commerce players, and
the emergence of new purchase funnels that bypass incumbents entirely.
We’re already seeing divergent strategies from incumbents:
Some have no clear agent policy yet.
Some block third-party shopping agents entirely.
Others allow discovery but block checkout.
The specific position depends on each player’s moat, market posture, and incentive structure. This will broaden into many more disputes and equilibrium shifts until the market stabilizes under new “rules.”
Search and other consumer categories
Commerce shifts cascade directly into search (because purchase intent is a large fraction of high-value search). And the impact won’t stop there—marketplaces, fintech, and essentially every consumer category will feel the downstream effects.
When market structure rewires, startup opportunity expands. That’s why the phrase “this is the last big opportunity before AGI” resonates—it captures the idea that platform transitions create rare windows where rules reset.
A2A infrastructure: what needs to exist
I covered parts of this in [Two Cents #71], but it’s worth repeating. A multi-agent, A2A world requires new rails and operating norms. Much of it is still below the surface, but momentum is building.
Areas likely to become very active:
Payment rails
M2M payment infrastructure for agent transactions.
Stablecoin-based rails are accelerating, and we’re seeing competition across incumbents and startups.
Identity rails
Authentication for agent transactions, proof-of-personhood for delegated actions, and delegation / access management.
Incumbents like Okta may move early, though startups will emerge here too.
Personalization data layer
Likely the core asset layer in consumer AI.
Security
Permissions, execution scope, and human-in-the-loop controls.
Privacy
Agent access to sensitive personal data (health, finance), and potentially on-chain or self-sovereign privacy architectures.
Tools
Automation and orchestration frameworks, observability, permissioning, and HITL systems.
Additional issues that will matter
Agent identity
Temporary vs persistent agents (task-based vs daemon-like ambient agents)
Who the agent represents (user vs service)
Machine-readable task specs
Delegation boundaries (e.g., purchase authority)
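The task-spec and delegation-boundary issues above can be made concrete with a sketch. The schema below is entirely invented for illustration; no such standard exists yet:

```python
# Hypothetical machine-readable task spec: who the agent represents,
# its lifetime, and its delegation boundaries. The field names and
# structure are invented for illustration, not a proposed standard.
import json

task_spec = {
    "task": "book_flight",
    "principal": {"type": "user", "id": "user-123"},   # who the agent represents
    "agent": {"lifetime": "ephemeral"},                # task-based vs daemon-like
    "constraints": {"depart_after": "2026-06-01", "max_stops": 1},
    "delegation": {
        "purchase_authority": True,
        "spend_limit_usd": 500,                        # hard boundary on spend
        "requires_confirmation_above_usd": 200,        # HITL threshold
    },
}

def within_authority(spec: dict, price_usd: float) -> bool:
    # Check a proposed purchase against the delegation boundary.
    d = spec["delegation"]
    return d["purchase_authority"] and price_usd <= d["spend_limit_usd"]
```

Whatever shape the eventual standard takes, it will need to encode exactly these things: principal, lifetime, constraints, and spend/confirmation boundaries that both sides of an A2A transaction can verify.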
Agent code of conduct
We already see standards forming for LLM crawling (e.g., llms.txt as an analogue to robots.txt).
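For a sense of what such a convention looks like in practice, here is a minimal sketch following the shape of the public llms.txt proposal (a markdown file at the site root: an H1 title, a blockquote summary, then sections of links). The site, URLs, and sections below are invented:

```markdown
# Example Store

> An online marketplace for handmade goods. This file points LLM crawlers
> at concise, machine-friendly versions of our documentation.

## Docs

- [Product catalog](https://example.com/catalog.md): structured product listings
- [Return policy](https://example.com/returns.md): plain-language policy summary

## Optional

- [Press kit](https://example.com/press.md)
```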
As agents gain delegated authority to decide and act, we will need behavioral and interaction norms—early examples like Agent Interaction Guidelines (AIG) point in that direction.
If agents can even “hire” humans to complete tasks, we’ll need conventions for that too.
More broadly, this will be a technical and social co-evolution problem. The system will converge—through trial and error—on norms that are not perfectly optimal, but acceptable enough for most participants.
In a sense, it’s a micro-version of how human societies evolved governance structures over time.
Key traits and implications of the agent transition
Because we are still early, the downstream effects—consumer behavior shifts, stakeholder dynamics, new economic equilibria—are not yet fully visible.
Most “agents” in the market today still look like workflow automation with a thin layer of reasoning: call it “Zapier plus.” Even the impressive products often feel like that same core combined with a deep-research mode.
Ambient agents are even earlier. Many features that look “ambient” today are effectively daemon-style automations that draft actions (e.g., email replies) and request confirmation.
The deeper shifts—new UI/UX primitives and new business structures—will emerge gradually as ecosystems become more agent-native. This may take 5–10 years to fully play out. That sounds long until you remember: Uber took ~11 years from founding to IPO.
What will drive the structural change?
Machine-to-machine interaction (M2M)
Agents interacting with services or other agents flips a foundational assumption: the “decision-maker” is no longer a human. That changes discoverability, flows, checkout, and incentives. Entire systems will need to be redesigned around this assumption.
Autonomy
Agents can make and execute decisions—sometimes material ones (format of requests, purchase choices, checkout)—which forces new primitives: authentication, delegation, privacy, liability boundaries, and new revenue-sharing models.
There are likely more drivers, but these two alone are enough to force rewrites across the stack.
Over the next 12–18 months, I expect the agent category—especially multi-agent A2A—will be one of the most dynamic areas in AI. We’ll see rapid experimentation, real failures, and iterative construction of the infrastructure and norms needed for a sustainable equilibrium.
Some of this will sound like science fiction. But it’s increasingly “SF” in the other sense: San Francisco, happening now. (Yes—pun intended 🙂)
Call for Startups
The purpose of sharing this thinking is straightforward. As an early-stage investor focused on Consumer + AI, I hope this series helps existing startups better leverage AI-driven shifts—and helps new founders reduce trial-and-error as they search for meaningful opportunities.
In that sense, this is Two Cents’ version of a Call for Startups.
If you are an early-stage founder or startup in Consumer + AI and believe you are onto something, my inbox is always open. Feel free to reach out via DM or email:
hur at hanriverpartners dot com

