Intelligence is $20 a month

All major AI subscriptions seem to cost $20 a month. Why is that?

We can speculate. The origin could be as innocuous as “it’s the going rate for premium subscriptions, see Netflix” or “yeah, that’ll be enough to tease investors without terrifying users”. And that line of thinking (or some variant of monkey-see-monkey-do) propagated the price to all the rest.

But speculation is no fun. We won’t be able to answer exactly why $20 a month is the price, but we can have a more thoughtful discussion about whether $20 a month makes sense based on the cost of the underlying technology. And by that I mean the cost of creating a ChatGPT-like application using an AI company’s underlying APIs.

API Pricing

Here’s the API pricing for large providers for each of their latest/most popular models.

| Provider | Model | Input Price (per 1M tokens) | Output Price (per 1M tokens) | Source |
| --- | --- | --- | --- | --- |
| OpenAI | GPT-4o | $2.50 | $10 | source |
| Anthropic | Claude 3.5 Sonnet | $3 | $15 | source |
| Google | Gemini 1.5 Pro | $1.25 | $5 | source |
| Mistral AI | Mistral Large 24.11 | $2 | $6 | source |
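
With per-token pricing, the cost of a single request is just a weighted sum of its input and output tokens. Here’s a minimal sketch of that arithmetic using the GPT-4o rates from the table above; the token counts in the example are made up.

```python
# Cost of one API call under per-1M-token pricing.
# Rates are GPT-4o's from the table above; token counts are hypothetical.
INPUT_PRICE_PER_M = 2.50    # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 10.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A 500-token prompt with a 300-token reply costs well under a cent:
print(f"${request_cost(500, 300):.5f}")  # $0.00425
```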

All these prices are less than $20, but with the ominous condition of “per 1M tokens”. 1 million tokens sounds like a lot, but what the hell is a token? When we chat, we think in terms of words and sentences and paragraphs, so let’s bring this pricing back to human-readable terms.

What is 1 million tokens?

A token is the smallest unit of language that a large language model “understands”. Whether a token maps to a word or a character or something else all depends on what method you use to tokenize your text.

Take the sentence “$20 is a lot of money” for example. We could tokenize this by splitting on spaces and produce the following list of tokens.

["$20", "is", "a", "lot", "of", "money"]

Or we could tokenize this by splitting on characters and produce the following list of tokens.

["$", "2", "0", " ", "i", "s", " ", "a", " ", "l", "o", "t", " ", "o", "f", " ", "m", "o", "n", "e", "y"]

In practice, companies improve upon these basic techniques with methods like byte-pair encoding where tokens are determined based on the occurrence frequency of pairs of characters. Ultimately, the point of tokenizing is to allow models to develop more nuanced understandings of language by exposing patterns in language structure at the sub-word level (e.g. knowing “un” + “happy” as separate tokens can help you infer what “unhappy” means).
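
As a concrete illustration, here’s a small sketch using OpenAI’s open-source tiktoken library to see how one of their models actually splits our example sentence. This assumes a tiktoken version recent enough to know GPT-4o’s encoding; other providers’ tokenizers will split the text differently.

```python
# Sketch: tokenize a sentence with OpenAI's tiktoken library
# (pip install tiktoken). Assumes a version recent enough to map "gpt-4o"
# to its encoding; other providers' tokenizers produce different splits.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o")

text = "$20 is a lot of money"
token_ids = enc.encode(text)

print(len(token_ids))                        # how many tokens you'd be billed for
print([enc.decode([t]) for t in token_ids])  # the tokens as text fragments
```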

In any case, every company may have a different tokenization process. We can’t exactly translate 1 million tokens to human-readable text without knowing their process.

That said, we can reasonably estimate using rules of thumb from OpenAI: 1 token is roughly 4 characters, the average English word is 5 characters, and the average English sentence is 15 words. That puts one English sentence at roughly 75 characters, or about 18 tokens.

1 million tokens, then, is >55k sentences, >800k words, or ~4 copies of Moby Dick, a ~600-page novel. Sending all that to GPT-4o as input costs $2.50. If we assume it outputs exactly the same amount ($10 for 1 million output tokens), then for $12.50 you can go back and forth with GPT-4o for a good long while.
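
Here’s the back-of-the-envelope arithmetic behind those numbers; Moby Dick’s word count (~210k) is approximate.

```python
# Back-of-the-envelope math for what 1M tokens buys, using the
# rules of thumb above. Moby Dick's word count (~210k) is approximate.
TOKENS_PER_SENTENCE = 18
WORDS_PER_SENTENCE = 15
MOBY_DICK_WORDS = 210_000

sentences = 1_000_000 / TOKENS_PER_SENTENCE  # ~55,556 sentences
words = sentences * WORDS_PER_SENTENCE       # ~833,333 words
novels = words / MOBY_DICK_WORDS             # ~4 copies of Moby Dick

cost = 2.50 + 10.00  # 1M tokens in + 1M tokens out at GPT-4o's rates
print(round(sentences), round(words), round(novels, 1), cost)
```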

It’s important to note here that in a typical AI chat experience, you’re not sending just a single message at a time. You’re actually sending the entire conversation history along with each new message. This makes up the model’s “context” so it can “remember” what’s been said. To account for this, let’s halve our prior estimate: for $12.50, you can type and receive about 2 novels’ worth of chat-like conversation rather than 4.
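
To see why, here’s a minimal sketch of a chat loop built on the official OpenAI Python client: every call sends the full messages list, so earlier turns keep getting billed as input tokens. The model name and prompts are just examples.

```python
# Minimal chat loop: each turn resends the entire history as input.
# Uses the official openai Python client; model and prompts are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # every prior turn was billed again as input tokens
```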

As long as you stay within those limits, that’s clearly cheaper than the premium subscription, and it could get even cheaper if you choose a more cost-effective API provider. But there’s more to consider here.

The value of an app

The premium subscriptions do much more than just let you send requests to a model. They let you chat! And there are tons of conveniences built around the chatting experience — persistent conversations, easy multi-modal inputs, great UX.

That said, there are now many open source tools that let you plug in an API key and get a similar experience. Some require self-hosting, which is its own cost, but some are hosted for you.

In which case, the UI offered by premium apps is mere table stakes, not further justification for pricing.

Does $20 a month make sense?

Everything we’ve discussed so far suggests that $20 a month is too high. You can get the raw functionality for cheaper and the chat experience for free. From a purely rational perspective, no, a premium AI subscription is not worth $20 a month when those same companies offer what are (effectively) cheaper, equivalent alternatives.

But as the saying goes, “something is worth what someone is willing to pay for it”. And millions of people think a ChatGPT Plus subscription is absolutely worth $20 a month. Maybe it’s the convenience of clicking a few buttons over stitching together an API key and a UI. Maybe it’s a matter of trust. Maybe it’s a preference for brand or design. Maybe it’s a power user who types novel after novel. Whatever it is, the value is there.

What’s next?

How long will $20 a month stay the norm? I think in the future we’ll see much more flexibility around pricing.

For now, everyone is competing on quality and features. AI providers are telling customers “we’re just as good as them, look at our offering, not our price point”. They can afford to do so because they’re bankrolled by the largest investors and companies on the planet (or are themselves the largest companies on the planet).

Eventually, as the market matures, we’ll get to see players that compete on price.

Today, intelligence is $20 a month. Tomorrow… well, here’s what ChatGPT Plus has to say: “Intelligence in the future will cost as much as electricity today: scalable, accessible, and priced by usage.”