AI is not the Everything Box
Why the “labs will do everything” thesis is flawed: models will trend to zero marginal cost; value accrues at the reality-coupled frontier.
There’s a controversial AI investment thesis making the rounds on X right now, one that largely posits the futility of competing with the AI labs on anything, because in time the models will come to encompass all things, so there’s no point. The post is here: https://x.com/yishan/status/1987787127204249824
I’ve heard this position from a number of investors behind closed doors, and I think it’s a more commonly held opinion than folks let on.
I think this take is fundamentally wrong, and since I don’t know the last time something I wrote on X actually got read by anyone I know - I figured I’d paste my response in here on Substack1:
This take is completely wrong, and in so many words is a justification to “sit out” AI from an investment standpoint, or only choose to invest in flash-in-the-pan offerings in a buy low / sell high fashion.
Maybe I’m reading it wrong, but the notion is that foundational model companies will grow ever bigger until they can do anything and everything, meaning no one should actually worry about building a business right now, because in due time the AI foundation models will come to encompass whatever they’re doing.
If anything, the rapid increase in AI capability is still upper-bounded by the fundamental entropy of the distributions it’s trained on. Further, the ability of AI to learn said distributions does not necessarily mean it can compositionally deconstruct the underlying processes that produced those distributions.
In the context of the corpus of human knowledge, being able to connect far-flung ideas across fields and sectors is already well beyond the capacity of most people, and is in and of itself a way to give human operators the ability to discover, develop, and build unique and new things. With that said, the underlying AI systems will not do so directly.
As such, I think the opposite is going to be true. Ever-expanding AI capabilities will approach a marginal cost of zero and come ever closer to the “limit” of the overall corpus of human knowledge. This then allows people to sit at that limit and step ever so slightly beyond it - in a way that will actually accelerate the speed of human innovation rather than the other way around.
If the argument is actually that AI will make it impossible for people to build another Dog Vacay because it’s possible to one-shot the app: ok, that is true - but it’s not going to let people step past the frontier.
I think the frontier of knowledge is defined by the direct interaction of intelligence and the environment that it’s placed in. In other words, you can be as smart as you want, but the universe “pushes back” and ultimately limits the speed at which the frontier can expand.
So far, no AI systems have been built that can autonomously self-determine and push that frontier out further, and ever-improving AI systems approaching the limit of the existing corpus of knowledge will not change this.
Ultimately, I anticipate the universe sets a “speed limit” here - one which, while I don’t think humanity has gotten close to it yet, is likely finite and hyperbolic in nature.
Separately, if the “AI models do all the things” argument is true, we should be 100x more worried about China. CN model developers do not have the same sensitivity around IP/copyright law, they have access to ever-expanding, seemingly never-ending resources, and they are fully embracing the notion of open-weights/open-source models.
As such, if AI models do all of the things, it also means open-source/free models do all of the things - and this ends up being a cliff our economy drives off of in short order once CN models are the obvious go-to and are in effect free.
I don’t think the freefall will happen, btw. Instead, the models will be free, foundational model AI companies will be valued against the utility they deliver to customers, and the availability of exponentially improving AI models that are in effect free will supercharge innovation - if anything resulting in a boost, rather than a collapse, in investable / value-accretive companies.
thanks for coming to my ted talk
As such, I’ve left the conversational tone of an X/Twitter post and have not “gone to town” on editing / revising this, as it’s simply cut/pasted directly.
