Ads in LLMs Aren't the Real Story. Behavioral Inference Is.
There is no surprise in the announcement that ChatGPT will introduce ads to free and low-cost tiers. Free digital services like email and search have always been subsidized by advertising. The current debate follows a familiar script. Zoë Hitzig's op-ed in The New York Times explains why she quit OpenAI: users entrust AI platforms with deeply personal information, and ads will monetize that intimacy. Platforms respond that they will not share individual user data with advertisers. She also warns of a two-tier system in which privacy becomes a premium feature; platforms reply that revenue is required to sustain the product. None of this is new.
What is new — and largely missing from the public conversation — is how much can be inferred from ad response behavior without ever transferring personal data.
The Economic Pattern Isn't New. The Intimacy Is.
The commercial internet has always involved tradeoffs. Early academic networks were subsidized by institutions. Commercial access required paid gateways like AOL and CompuServe, both of which included advertising. When websites became dominant, the seeds of the current online advertising duopoly were sown.
Google and Facebook followed the same model: free utility at massive scale, funded by advertising. The difference today is not the presence of ads; it is the level of intimacy users bring into conversational AI systems. Search queries reveal intent in exchange for advertising revenue. Social feeds reveal preference, and their algorithms have evolved to monetize time spent on the platform in exchange for what is essentially entertainment.
AI chatbots offer conversational validation and emotional intimacy, which is proving far more effective at holding attention than traditional feed algorithms. The difference here is not just addiction, but emotional reliance. The relationship psychotherapist Esther Perel put it well in her interview with The New York Times: "When you say you have fallen in love with A.I., you have fallen in love with a business product...you are in love with a business product whose intentions and incentives are to keep you interacting only with them."
LLM conversations often reveal fear, vulnerability, financial stress, relationship conflict, health concerns, and identity exploration. The incentive structure matters.
The Shift: From Targeting Data to Targeting Behavior
The central issue is not whether OpenAI shares conversation transcripts with advertisers. They do not need to. Modern advertising systems operate on performance feedback loops. An advertiser does not need to see your private prompts to learn from your behavior. If a user responds to a therapy ad, a debt consolidation offer, or fertility support services, the platform records the outcome signal.
Over time, these signals refine targeting models. Advertisers can already purchase demographic cohorts, behavioral segments, geographic precision, and psychological proxies. In LLM environments, response behavior becomes even more predictive than traditional feed-based engagement. Again, there is no need to sell the conversation data itself; the platform sells optimized access to statistically similar users.
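A minimal sketch of what "access to statistically similar users" can mean in practice (all user IDs, features, and numbers below are illustrative, not any platform's actual system): users who responded to an ad form a seed cohort, and everyone else is ranked by similarity to that cohort's behavioral profile. No conversation text ever changes hands; only the derived signals do.

```python
import math

# Hypothetical behavioral feature vectors derived from interaction
# patterns (click-through tendencies, topic engagement, session depth).
# Raw conversation transcripts are never part of this exchange.
users = {
    "u1": [0.9, 0.1, 0.8],    # responded to the ad (seed cohort)
    "u2": [0.8, 0.2, 0.7],    # responded to the ad (seed cohort)
    "u3": [0.1, 0.9, 0.2],    # ignored the ad
    "u4": [0.85, 0.15, 0.75], # never shown the ad yet
}
seed = ["u1", "u2"]

def centroid(vectors):
    """Average the seed cohort into a single behavioral profile."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

profile = centroid([users[u] for u in seed])
scores = {u: cosine(users[u], profile) for u in users if u not in seed}
# Non-seed users ranked by similarity to the responding cohort;
# the top of this list is who sees the ad next.
ranked = sorted(scores, key=scores.get, reverse=True)
```

Here `u4` scores highest because its behavior resembles the responders, which is exactly the outcome an advertiser pays for without ever seeing an individual's data.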
Why This Matters
Conversational AI introduces three structural changes:
- Higher trust environments. Users engage with LLMs as advisors, assistants, even confidants. That increases perceived neutrality.
- Stronger intent signals. A conversational query often reflects immediate need rather than passive interest.
- Closed feedback loops. Every click, ignore, or conversion strengthens predictive modeling.
This shifts advertising from demographic targeting toward probabilistic inference based on interaction patterns. This is not inherently unethical. It is economically efficient.
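The closed-loop dynamic described above can be sketched in a few lines (a toy model, not any vendor's implementation): each observed outcome nudges a per-segment response estimate, and that estimate in turn decides who is shown the ad next.

```python
def update(estimate, outcome, lr=0.1):
    """Nudge the response estimate toward the observed outcome.

    outcome: 1 for a click/conversion, 0 for an ignore.
    This is an exponential moving average, the simplest
    possible stand-in for a real predictive model.
    """
    return estimate + lr * (outcome - estimate)

rate = 0.5  # uninformed prior for a hypothetical segment
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:  # illustrative outcomes
    rate = update(rate, outcome)
# 'rate' now reflects this segment's observed responsiveness and
# feeds back into targeting: every interaction sharpens the model.
```

Even this crude loop converges on a segment's true responsiveness given enough impressions, which is why response behavior alone, with no transcript access, is commercially sufficient.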
What About Guardrails?
The risks are greatest for vulnerable populations, minors, and individuals in distress. Historically, regulation — not platform self-governance — has defined the guardrails around addictive design, data use, and exploitative targeting. Technology moves quickly. Legislation does not.
What This Means for Marketers
For brands and agencies, this is not simply a policy discussion. It is an infrastructure shift.
- Conversational placement will compete with feed placement. Sponsored responses and contextual insertions will evolve differently from scroll-based ads.
- Performance modeling will rely more heavily on response signals. Optimization cycles may tighten as conversational intent produces higher-quality signals.
- Transparency will become a competitive differentiator. Brands that clearly disclose why an ad appears, and align it with legitimate need, will build more durable trust.
- Content strategy must adapt. Increasingly, brands will need to create authoritative, structured information designed to be surfaced organically within AI conversations, not only purchased through ads.
At Catalysis, we already advise clients to build content architectures that can participate in LLM retrieval systems. Paid placement will matter. But so will being included in the conversation itself.

Let's Connect
Get a fresh perspective on your marketing strategy. Email our founder to talk more: doug@catalysis.com.