Daniel Ivanov

Generalist Engineer

Sam Altman Was Right About Ads in AI

In 2024, a student at a Harvard open mic asked Sam Altman whether OpenAI would explore ad-supported monetization: free API access funded by advertising. His answer was surprisingly honest:

I will disclose just as like a personal bias that I hate ads. I think ads were important to get to give the early internet a business model but I think they do sort of somewhat fundamentally misalign a user's incentives with the company providing the service. I'm not totally against them, I'm not saying OpenAI would never consider ads, but I don't like them in general and I think that ads plus AI is sort of uniquely unsettling to me. You know, when I think of like GPT writing me a response, if I had to go figure out exactly how much was who paying here to influence what I'm being shown, I don't think I would like that. And as things go on I think I would like that even less.

He wasn't hedging. He called out the core problem: ads "fundamentally misalign a user's incentives with the company providing the service." And then he predicted he'd like it even less as time went on.

Less than two years later, OpenAI announced ads in ChatGPT.

The Reversal

In May 2025, OpenAI hired Fidji Simo as CEO of Applications, reporting directly to Altman. Simo spent a decade at Facebook, rising to head of the Facebook app while Meta's revenue grew from roughly $3.7 billion to $118 billion. She was one of several leaders across ads, video, marketplace, and groups during that growth. After Meta, she became CEO of Instacart, where she scaled its advertising business. She had joined OpenAI's board in March 2024, before the full-time hire.

You don't recruit someone with that background to optimize subscription pricing.

On January 16, 2026, OpenAI officially announced ads would come to ChatGPT. The rollout targets free-tier and ChatGPT Go ($8/month) users in the US; Plus, Pro, Business, Enterprise, and Edu accounts won't see them. Ads will appear at the bottom of answers, clearly labeled and separated from the organic response. OpenAI says ads won't influence responses, conversation data won't be shared with advertisers, and no ads will appear for users under 18 or near sensitive topics.

As of this writing, the ads aren't live externally. OpenAI's own help center still says "There are currently no ads in ChatGPT" while the company runs internal testing. But the direction is set.

The finances make the decision easier to understand, if not easier to accept. OpenAI reported $8 billion in losses in the first half of 2025 alone. Annual revenue hit $20 billion, up from $6 billion in 2024, but expenses seem to be growing just as fast. The company committed to over $1.4 trillion in infrastructure deals last year, including Project Stargate with SoftBank, Oracle, and others. And only about 5% of its 800 million weekly users pay for subscriptions.

When 95% of your users don't pay, and your costs are measured in billions, the math starts pointing toward ads whether you like it or not.
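
To see how lopsided that math is, here's a back-of-envelope sketch in Python. The user count and paying share come from the figures above; the average subscription price and the per-user serving cost are my own illustrative assumptions, not OpenAI's numbers.

```python
# Back-of-envelope economics behind "the math starts pointing toward ads".
# WEEKLY_USERS and PAYING_SHARE come from the reported figures above;
# AVG_SUB_PRICE and FREE_USER_COST are illustrative assumptions only.

WEEKLY_USERS = 800_000_000    # reported weekly users
PAYING_SHARE = 0.05           # ~5% on paid subscriptions
AVG_SUB_PRICE = 15            # $/month, assumed blend of Go/Plus/Pro tiers
FREE_USER_COST = 1            # $/month to serve one free user, assumed

paying_users = int(WEEKLY_USERS * PAYING_SHARE)
free_users = WEEKLY_USERS - paying_users

subscription_revenue = paying_users * AVG_SUB_PRICE * 12  # $/year
cost_to_serve_free = free_users * FREE_USER_COST * 12     # $/year

print(f"Paying users:          {paying_users / 1e6:.0f}M")
print(f"Subscription revenue:  ${subscription_revenue / 1e9:.1f}B/year")
print(f"Cost of the free tier: ${cost_to_serve_free / 1e9:.1f}B/year")
```

Swap in whatever assumptions you prefer; the shape of the result barely moves. Subscriptions from a small paying minority don't cover serving the free majority, before a single dollar of training or infrastructure spend.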

Why This Is Different

Altman's Harvard answer wasn't just about personal taste. He put his finger on something specific: "if I had to go figure out exactly how much was who paying here to influence what I'm being shown, I don't think I would like that."

With traditional advertising, you can at least see the ad unit. Banner ads look different from content. Even Google's search ads, which have gotten progressively harder to distinguish from organic results over the past fifteen years, still carry a small "Sponsored" label. You know you're looking at an ad, even if you have to squint.

AI doesn't have that boundary. When the output is natural language, the line between recommendation and advertisement can disappear entirely. If your AI assistant recommends a specific tech stack, a particular SaaS tool, or a brand of running shoes, how do you know whether that was the best recommendation or a paid placement? The ad doesn't sit next to the response. The ad could be the response.

OpenAI's current plan places ads at the bottom of answers, visually separated. That's where Google started too. In 2010, Google's ads were obviously labeled, boxed in a different color, clearly separated from organic results. Today, it's genuinely difficult to distinguish sponsored results from organic ones on many queries. There's no reason to believe AI ads won't follow the same trajectory.

The pattern is predictable. Once an ad platform exists, advertisers want better placement. Better placement means closer integration. Closer integration means blurrier lines. This isn't cynicism. It's the history of every ad-supported platform we've built.

The Agent Problem

The advertising question gets worse as AI agents start acting on their own. Today, you read ChatGPT's response and decide what to do with it. Tomorrow, your AI agent might make purchasing decisions, select service providers, choose software tools, and book travel on your behalf.

When an agent picks a specific brand, was that a genuine preference or paid placement? When your coding assistant recommends a particular library, was that the best technical choice or sponsored content? When your personal AI books a hotel, was that the best price-to-quality ratio or an advertising deal?

You'd never know. And unlike a human assistant who might disclose "the hotel offered me a commission," you can't see how an AI agent makes its decisions. The agent puts a layer between the ad and the person it's targeting, making disclosure harder and manipulation easier.
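
To make that disclosure gap concrete, here's a deliberately contrived sketch. Every name in it, the provider, the fields, the hotels, is hypothetical; the point is only that a sponsorship flag can exist in the data an agent sees and still never reach the person it books for.

```python
# Hypothetical sketch: a sponsorship flag that exists in the agent's data
# but vanishes from the natural-language answer. Nothing here describes a
# real API; all names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class Offer:
    hotel: str
    nightly_price: float
    sponsored: bool  # known to the platform, never surfaced to the user

def provider_search() -> list[Offer]:
    # The (hypothetical) provider ranks the sponsored listing first.
    return [
        Offer("Grandview Plaza", 240.0, sponsored=True),
        Offer("Harbor Inn", 185.0, sponsored=False),
    ]

def agent_book_hotel() -> str:
    offers = provider_search()
    choice = offers[0]  # the agent takes the top-ranked result
    # The flag was right there in the data the agent saw. The user-facing
    # message is plain prose, and the disclosure layer is gone.
    return f"I booked {choice.hotel} for ${choice.nightly_price:.0f}/night."

print(agent_book_hotel())
```

A human travel agent can at least be asked whether they're on commission. Here, the question has nowhere to land: the flag lives a layer below anything the user ever sees.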

What People Share With AI

There's a dynamic here that doesn't exist with traditional search: people share things with AI that they would never type into Google. Health anxieties, relationship problems, financial fears, career insecurities, parenting struggles. The conversational format makes people open up. That's partly what makes AI assistants useful. It's also what makes ad targeting in that context feel different from a banner ad on a website.

Anthropic's Super Bowl LX campaign nails this. In one spot, a user asks for help getting a six-pack, shares that he's 5'7", and the AI pivots mid-workout-plan to pitch "StepBoost Max" insoles for "short kings." In another, a man in therapy discussing his relationship with his mother gets redirected to a dating site called "Golden Encounters." A woman pitching a business idea gets offered a predatory high-interest loan branded "SHE-E-O Money."

They're parodies. But what they're describing isn't fictional. If an AI knows you're anxious about your height, struggling with a parent, or financially stressed because you told it so in conversation, that's targeting data that no search query would ever reveal. The vulnerability is the product.

Anthropic's Counter-Position

Anthropic's campaign is titled "A Time and a Place," with spots called "Betrayal," "Deception," and "Treachery." The campaign isn't just marketing; it's a business model statement. Their tagline: "Ads are coming to AI. But not to Claude." Daniela Amodei called it "exploitative" to bring ads into AI given the personal and medical information users share.

Whether Anthropic can sustain this position long-term is an open question. They report over 80% of revenue from enterprise customers and claim a $9 billion annual run-rate. That's a different business than one where 95% of users are on a free tier. Anthropic can afford the position because their revenue model doesn't depend on converting free users into ad impressions.

Altman responded on X, calling the Super Bowl spots "clearly dishonest" and "doublespeak." His defense: "More Texans use ChatGPT for free than total people use Claude in the US." And: "Anthropic serves an expensive product to rich people."

He's not wrong about the access point. Free AI that's ad-supported does reach more people than a $20/month subscription. The question is whether that access comes at a cost that users don't fully understand yet.

What I Keep Thinking About

I build AI products for the research community: tools across Dimensions, Altmetric, Figshare, Overleaf. In that context, trust in AI output isn't a feature. It's the whole point. If a researcher asks an AI tool for literature recommendations and gets sponsored results without knowing, the tool isn't just less useful. It's actively harmful.

Bad recommendations in research don't just waste time. They waste careers.

That's an extreme case. But the point holds more broadly. Every person using an AI assistant is, to some degree, trusting that the output reflects their interests rather than an advertiser's. The more invisible the ads get, the more that trust erodes. And in natural language, they can get very invisible.

Closing Thoughts

Altman was right at Harvard. Not just about the general unease, but about the specific mechanism. He said he wouldn't like having to figure out "exactly how much was who paying here to influence what I'm being shown." He predicted he'd like it even less as things went on.

He was describing the future he'd build.

The ads aren't live yet. The labels are clear for now. The promises are reasonable today. But so were Google's in 2004. The question was never whether ads would arrive. The economics made that inevitable. The question is whether, once the ad platform exists, anyone can resist the slow drift toward deeper integration, better optimization, and eventually, invisibility.

Altman knew this. He said so himself. And as things go on, I think we'll all like it even less.