Advertising is the best way to bring internet services to as many consumers as possible.
Author: Bryan Kim
Translation: Deep Tide TechFlow
Deep Tide Introduction: The internet is a miracle of universal access that opens the door to opportunity, exploration, and connection, and advertising is what pays for that miracle. a16z partner Bryan Kim argues that OpenAI's announcement last month that it plans to introduce advertising for free users may be "the biggest non-news news of 2026 so far."
If you've been paying attention, the signs have been everywhere: advertising is the best way to bring internet services to as many consumers as possible.
The data show that conversion rates at consumer AI subscription companies are quite low (5-10%), because most people use AI for personal productivity tasks (writing emails, searching for information) rather than high-value pursuits such as programming. Even so, 5-10% of 800M WAU is already 40-80M paying users; but to reach a billion users, advertising is needed.
The full text is as follows:
The internet is a miracle of universal access that opens the door to opportunity, exploration, and connection, and advertising is what pays for that miracle. As Marc has long argued, "If you take a principled stance on advertising, you are also taking a stance on broad access." Advertising is the reason we have nice things.
That is why OpenAI's announcement last month that it plans to introduce advertising for free users may be "the biggest non-news news of 2026 so far." Of course it is: if you've been paying attention, the signs have been everywhere. Fidji Simo joined OpenAI as CEO of Applications in 2025, and many read the hire as "she will bring in advertising, just as she did at Facebook and Instacart." Sam Altman has been previewing an advertising rollout on business podcasts. Tech analysts like Ben Thompson have been predicting ads almost since ChatGPT launched.
But the main reason advertising is not surprising is that it is the best way to bring internet services to as many consumers as possible.
The Long Tail of LLM Users
The term "luxury beliefs," which became popular a few years ago, refers to stances people take for optics rather than on principle. The tech world is full of examples, especially around advertising. Despite all the moral gymnastics over buzzwords like "selling data!" or "tracking!" or "attention harvesting," the internet has always run on advertising, and most people prefer it this way. Internet advertising has created one of the greatest "public goods" in history, at the negligible cost of occasionally having to sit through ads for cat sleeping bags or hydroponic living-room gardens. Those who pretend this is a bad thing are usually trying to prove something to you.
Any student of internet history knows that advertising is a core part of how platforms ultimately monetize: Google, Facebook, Instagram, and TikTok all started out free and then found monetization through targeted advertising. Advertising can also supplement ARPU from lower-paying subscribers, as with Netflix's newer $8-per-month tier, which brought ads to the platform. Advertising has done a great job of training people to expect most things on the internet to be free or very cheap.
We can now see this model emerging across frontier labs, specialized model companies, and smaller consumer AI companies. From our survey of consumer AI subscription companies, it is clear that converting free users into subscribers is a real challenge for all of them:

So what is the solution? As we know from past consumer success stories, advertising is often the best way to extend services to billions of users.
To understand why most people do not pay for AI subscriptions, it helps to understand what people use AI for. Last year, OpenAI released data on this.

In short, most people use AI for personal productivity: writing emails, searching for information, tutoring, or advice. High-value pursuits like programming account for only a small share of total queries. Anecdotally, programmers are among the most devoted users of LLMs, with some even adjusting their sleep schedules to make the most of daily usage limits. For these users, a $20 or even $200 monthly subscription does not seem excessive, since the value they receive (the equivalent of a team of efficient SWE interns) can far exceed the cost.
But for users who turn to LLMs for general queries, advice, or even writing help, the payment hurdle is too high. Why pay for answers to questions like "Why is the sky blue?" or "What caused the Peloponnesian War?" when a Google search used to provide a good-enough answer for free? Even for writing help (some people do use it for email and routine tasks), it often does not take over enough of a person's work to justify paying for a subscription. And most people simply do not need advanced models and features: you do not need the best reasoning model to draft emails or suggest recipes.
Let's step back and acknowledge a few things. The absolute number of people paying for products like ChatGPT is still huge: 5-10% of 800M WAU is 40-80M people! And the $200 Pro price point is ten times what we consider the upper limit for consumer software subscriptions. But if you want to get ChatGPT to a billion people (and beyond) for free, you need monetization beyond subscriptions.
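To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The 800M WAU figure, the 5-10% conversion range, and the $20/$200 price points come from this article; the split between Plus and Pro subscribers is a purely illustrative assumption, not a disclosed number.

```python
# Back-of-the-envelope math for the conversion figures above.
# 800M WAU, the 5-10% conversion range, and the $20/$200 price points
# come from the article; the Plus/Pro mix is an illustrative assumption.

WAU = 800_000_000

def paying_users(conversion_rate: float) -> int:
    """Paying subscribers at a given free-to-paid conversion rate."""
    return int(WAU * conversion_rate)

def monthly_subscription_revenue(payers: int, pro_share: float = 0.05) -> float:
    """Assumed monthly revenue if `pro_share` of payers take the $200 Pro plan
    and the rest take the $20 Plus plan."""
    pro = payers * pro_share
    plus = payers - pro
    return pro * 200 + plus * 20

for rate in (0.05, 0.10):
    payers = paying_users(rate)
    revenue = monthly_subscription_revenue(payers)
    print(f"{rate:.0%} conversion -> {payers / 1e6:.0f}M payers, "
          f"~${revenue / 1e9:.2f}B/month under the assumed plan mix")
```

Even under generous assumptions, the other roughly 90-95% of weekly users generate no direct revenue, and that is the gap advertising is meant to fill.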
The good news is that people actually do like advertising! Ask an average Instagram user and they may tell you the ads they get are genuinely useful: they surface products they really want and need, and lead to purchases that improve their lives. Dismissing advertising as exploitative or intrusive is regressive: perhaps we feel that way about TV ads, but most of the time, targeted ads are actually pretty good content.
I am using OpenAI as the example here (they have been one of the most candid labs about their usage trends), but the logic applies to every frontier lab: to scale to billions of users, they will ultimately need to introduce some form of advertising. Consumer monetization in AI is still an unsolved problem. In the next section, I outline some possible approaches.
Possible AI Monetization Models
My general rule of thumb in consumer application development is that you need at least 10 million WAU before introducing advertising. Many AI labs have already reached this threshold.

We already know that ad units are coming to ChatGPT. What might they look like, and what other advertising and monetization models are feasible for LLMs?
1. Higher-value search and intent-based advertising: OpenAI has confirmed that this type of advertising (recipe ingredients, travel and hotel recommendations, etc.) will soon launch for free and low-cost tier users. These ads will be kept distinct from ChatGPT's answers and clearly marked as sponsored.
Over time, ads may feel more like prompts: you express an intent to purchase something, and an agent completes the request end to end, choosing from a mix of sponsored and organic listings. In many ways these ads recall the earliest ad units of the '90s and 2000s, as well as the format Google perfected with its sponsored search ads (notably, Google still derives the vast majority of its revenue from advertising and only moved into subscriptions after more than 15 years).
2. Instagram-style contextual advertising: Ben Thompson has argued that OpenAI should have introduced ads into ChatGPT responses earlier. First, it would have let non-paying users get used to advertising sooner (back when ChatGPT had a real capability lead over Gemini).
Second, it would have put them ahead in building a truly great advertising product, one that predicts what you want rather than opportunistically serving ads against intent-based queries. Instagram and TikTok deliver remarkable advertising experiences, showing you products you didn't know you wanted but absolutely need to buy immediately, and many people find the ads useful rather than intrusive.
Given how much personal information and memory OpenAI holds, there is ample opportunity to build a similar advertising product for ChatGPT. Of course, the apps are used differently: can the "lean-back" advertising experience of Instagram or TikTok translate to the more intent-driven way people use ChatGPT? That is a much harder question, and a more lucrative one.
3. Affiliate commerce: Last year, OpenAI announced partnerships with marketplaces and individual retailers to launch an instant checkout feature that lets users buy directly in chat. You can imagine this growing into its own dedicated shopping vertical, where agents actively hunt for clothing, home goods, or scarce items you are tracking, with the model provider earning a revenue share from the marketplaces surfaced in the service.
4. Gaming: Gaming is often forgotten or overlooked as its own ad category, and it is unclear how game ads would fit into ChatGPT's advertising strategy, but they are worth mentioning here. App-install ads (many of them for mobile games) have driven a significant share of Facebook's advertising growth for years, and gaming is profitable enough that it is not hard to imagine a large advertising budget emerging here.
5. Goal-based bidding: This one is for fans of auction algorithms (or former blockchain gas-fee optimizers looking to pivot to LLMs). What if you could set a bounty on specific queries (say, $10 for Noe Valley real estate alerts) and have the model spend extra computation on that result? You would get near-perfect price discrimination based on how much a question is "worth" to the asker, and stronger guarantees on the reasoning spent on the searches that matter most to you.
Poke is one of the best examples of this: people have to explicitly negotiate their subscription with the chatbot (this does not map onto compute costs, of course, but it is an interesting illustration of what the interaction might look like). In some ways this is already how certain products work: both Cursor and ChatGPT have routers that pick a model for you based on the complexity of the query. But even when you select a model from a dropdown, you cannot choose how much underlying compute the model spends on your question. For highly engaged users, being able to state in dollars how much a question is worth to them could be very appealing (see the sketch after this list).
6. AI entertainment and companion subscriptions: AI users show two main use cases they are willing to pay for: coding and companionship. CharacterAI has one of the highest WAU counts of any non-lab AI company, and it can charge a $9.99 subscription because it offers a mix of companionship and entertainment. However, even though people do pay for companion apps, we have not yet seen companion products cross the threshold where they can reliably monetize through advertising.
7. Token-based usage pricing: In AI creative tools and coding, token-metered usage pricing is also a common monetization model. It has become an attractive mechanism for companies with power users, letting them differentiate and charge more based on usage.
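Returning to goal-based bidding (item 5): below is a minimal, purely hypothetical sketch of how a dollar bounty might be mapped to a model tier and a compute budget. The tier names, per-token prices, provider margin, and thresholds are all invented for illustration; nothing here reflects how ChatGPT, Cursor, or Poke actually route requests.

```python
# Hypothetical sketch of "goal-based bidding": a user attaches a dollar
# bounty to a query, and a router converts it into a model tier and a
# reasoning/token budget. Tier names, per-token prices, the provider
# margin, and the minimum-budget threshold are all invented.

from dataclasses import dataclass

@dataclass
class Plan:
    model: str       # which hypothetical model tier handles the query
    max_tokens: int  # how much reasoning/output budget the bounty buys

# Assumed cost per 1K tokens for each hypothetical tier, strongest first.
TIER_COST_PER_1K = {
    "deep-reasoning": 0.06,
    "standard": 0.01,
    "fast": 0.002,
}

def plan_for_bounty(bounty_usd: float, margin: float = 0.3) -> Plan:
    """Pick the strongest tier the bounty can meaningfully afford,
    keeping `margin` of the bounty as the provider's take."""
    spend = bounty_usd * (1 - margin)
    for tier, cost_per_1k in TIER_COST_PER_1K.items():  # strongest tier first
        tokens = int(spend / cost_per_1k * 1000)
        if tokens >= 4_000:  # minimum budget worth running on this tier
            return Plan(model=tier, max_tokens=tokens)
    return Plan(model="fast", max_tokens=2_000)  # free-tier-style fallback

# Example: a $10 bounty on "alert me to Noe Valley real estate listings"
print(plan_for_bounty(10.0))   # lands on the deep-reasoning tier
print(plan_for_bounty(0.05))   # a small bounty falls back to a cheaper tier
```

The hard part in practice is exactly what the article notes: today's routers choose a model for you, but the underlying compute spend is not something a user can buy directly.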
Monetization in AI remains an unsolved problem, with most users still enjoying the free tier of their preferred LLM. But this is only temporary: the history of the internet tells us that advertising will find a way.