Author: Ben Basche
Translation: Deep Tide TechFlow
Deep Tide Introduction: "In the gold rush, sell shovels" was once the golden rule in the startup community. But in the AI era, this logic has broken down—because the miners themselves have opened hardware stores. OpenAI, Anthropic, and Google are systematically consuming middleware layers, programming assistants, browser automation, and other startup tracks. Author Ben Basche believes that the AI companies that can truly survive are not those that sell tools, but those that use AI as raw material in vertical fields like "jewelers"—delving deep into specific industries, mastering local knowledge, and possessing irreplaceable context.
The full text is as follows:
There is a saying that became gospel in the startup world around the first dot-com bubble: "In a gold rush, sell shovels and picks." The idea is that the real money is made not by the miners but by their suppliers. It was Levi Strauss who got rich, not the prospectors.
It is a good framework, and it held true for a while.
But in AI, it is wrong. If your company is built on this logic, you should take a hard look at what has happened over the past twelve months.
The laboratory is the entire tech stack
Here is what actually happened—first quietly, then suddenly breaking out completely.
OpenAI released Operator, a computer-use agent that can browse the web, fill out forms, and execute tasks end to end. Then came the Responses API and Agents SDK, giving developers native tool calls, memory, and orchestration without a third-party framework. Next came Codex, a cloud coding agent that can autonomously write, test, and iterate on software. Add Deep Research on top of that. Two years ago, any one of these products would have been enough to anchor a funded startup.
Anthropic released Claude Code, Computer Use, Projects with persistent memory, and MCP (Model Context Protocol)—almost overnight, it became the mainstream standard for connecting AI with external tools and data. Then they donated MCP to the Linux Foundation to ensure it is infrastructure rather than a product. They also launched Claude in Excel, Claude in Chrome, and Cowork.
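To make concrete what MCP standardizes: it is a JSON-RPC 2.0 protocol in which a client discovers a server's tools and invokes them by name, getting back structured content blocks. A minimal sketch of that message shape follows; the `search_invoices` tool and its arguments are hypothetical, purely for illustration.

```python
import json

# A minimal MCP-style "tools/call" request: JSON-RPC 2.0, with the tool
# name and its arguments under params. The tool here is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_invoices",
        "arguments": {"region": "ZA", "status": "unpaid"},
    },
}

# A conforming server replies with a result whose content is a list of
# typed blocks (text, images, etc.).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "3 unpaid invoices found"}],
    },
}

print(json.dumps(request, indent=2))
```

The point of donating a wire format this simple is exactly the one the author makes: once every client and server speaks it, the connector layer stops being a product anyone can sell.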
Google released Gemini 2.0, natively embedding tool calls and multimodal perception capabilities, integrating Vertex AI as an enterprise-level agent control plane, providing organization-level policy and orchestration out of the box.
Each of these moves eats away at the territory once held by some startup.
The logic of "selling shovels" carries an implicit assumption: the laboratories will stay in their lane. They train foundation models and provide APIs, leaving the tool layer, the orchestration layer, and the application layer to the ecosystem. That assumption is dead.
The middleware massacre
Let’s take a look at what specifically happened in the middleware layer.
LangChain was the archetypal "selling shovels" bet of the 2023 AI boom: an orchestration framework for chaining LLM calls, connecting tools, and managing memory. Thousands of teams built products on it, and it collected over 100,000 GitHub stars. By 2024, teams were blogging about why they were ripping it out of production. Not because it was bad, but because the underlying models had become smart enough not to need it. The abstraction layer LangChain built was solving yesterday's problems.
Meanwhile, OpenAI shipped its own Agents SDK. Microsoft shipped AutoGen and Semantic Kernel. The laboratories and their parent companies did not acquire LangChain; they simply rebuilt what LangChain did natively on their own platforms.
The same script plays out at every layer. Agent frameworks, prompt management tools, RAG pipelines, evaluation frameworks, observability tools. All these are being absorbed into native products by vendors running underlying models.
The harsh reality: when OpenAI or Anthropic embed orchestration directly into their APIs, they do not need to win on features. They just need to be "good enough" and "already there." Developers default to the path of least resistance. A startup with clever middleware has to build a significant lead and then hold it against rivals with effectively unlimited capital and control of the underlying infrastructure. That is not a business; it is a research project with a countdown.
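The orchestration that frameworks packaged up is, at its core, a short loop: send the conversation plus tool schemas to a model, execute whatever tool call comes back, append the result, repeat until the model answers. The sketch below shows that loop with a stubbed model; `fake_model` and `get_price` are hypothetical stand-ins, not any real API, chosen so the mechanics are visible without network calls.

```python
import json

# Hypothetical tool the "agent" can call.
def get_price(item: str) -> str:
    return json.dumps({"item": item, "price": 42})

TOOLS = {"get_price": get_price}

# Stand-in for a model API call: on the first turn it requests a tool;
# once a tool result is in the conversation, it answers in plain text.
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_price",
                              "arguments": {"item": "shovel"}}}
    return {"content": "A shovel costs 42."}

def run_agent(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # model produced a final answer
        # Execute the requested tool and feed the result back.
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})

print(run_agent("How much is a shovel?"))  # A shovel costs 42.
```

A loop this small is precisely why the layer got absorbed: once the model vendors run it server-side, there is nothing left for a framework to sell.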
The miners have opened their own hardware stores, so you can't sell shovels
The "selling shovels" analogy fails in AI because of a key structural difference. In 1849, Levi Strauss and the hardware merchants did not mine for gold themselves. Miners and suppliers were independent roles with separate interests.
In AI, the laboratories are mining, selling shovels, building the roads, and printing the maps all at once. They have every incentive to own the entire stack, because each additional layer they control means more lock-in points, more margin expansion, and a wider distribution moat.
When Anthropic donated MCP to the Linux Foundation, that was not charity. It was ensuring that a standard they designed becomes universal infrastructure, the way Ethernet did. Standards are the tech industry's strongest moats because they are invisible and they endure.
So, if your startup's value proposition is "we sit between developers and models, making X easier," you need to face a fact: the entity you are sitting between has already taken notice of you, has the resources to replicate you, and has structural reasons to do so.
So what really works?
Returning to the gold rush metaphor. If shovels can no longer be sold, what should you sell?
Sell jewelry.
Or, more precisely: treat gold as an industrial raw material and make things the miners themselves have no interest in making.
In the real gold rush of 1849, the businesses that weathered the boom were not those selling generic tools. They were the ones that treated gold as raw material, using deep expertise to create specific products. Jewelers, dentists, later electrical engineers. These people understood specific application scenarios to a depth that generalists could not.
The AI version is building applications in vertical fields—those that require real-world context that laboratories do not possess and find hard to acquire.
Think about what OpenAI, Anthropic, and Google structurally do not excel at:
They do not deeply understand the workflows of your industry. They have no relationship with your customers. They cannot cost-effectively acquire the proprietary data that makes a model genuinely useful in a specific setting. They will never dig into why independent artisans in South Africa invoice the way they do, why mobile-money integration in Kenya is anything but simple, or why medical prior authorization in the US is a specific, thorny, deeply embedded operational problem.
Laboratories are building horizontal infrastructure. The opportunity lies in vertical fields—those that require geographical, regulatory, cultural, and industry-specific local knowledge to truly work.
That is why fintech in emerging markets, legal AI targeting specific jurisdictions, compliance tools in regulated industries, and workflow automation in niche professional fields are all more defensible than "building a better LangChain."
The moat is not in the model. The moat is in the context.
The industrial use of gold
There is a second version of this idea worth spelling out: use AI like industrial gold. Not as a store of value or a display piece, but as a component embedded in something that creates lasting economic value.
The conductivity of gold is almost unmatched. That is why it is in every circuit board. No one talks about it, and no one hypes it in this context. It quietly serves as a key input in a larger system.
The most enduring AI companies currently being built treat models as components—inputs to products solving real problems—rather than treating the models themselves as products. AI is the gold in the circuit board, not the gold in the display case.
The actual operation works like this: you select a field with real pain points, genuine workflow complexity, and hard-to-obtain data, then build a product that just happens to use models to make it much better. AI is the implementation detail; the product is what replaces the painful manual process.
This is the opposite of "we built a shell on top of GPT-4." The shell is the display case; the circuit board is invisible.
Recently eliminated tracks
To be clear, here are some categories of startups that laboratories have been systematically consuming since the end of 2024:
Agent orchestration frameworks. Now native features of OpenAI Agents SDK, Anthropic toolchain, and Google Vertex Agent Builder.
AI programming assistants. OpenAI's Codex now does full repository-level autonomous coding. So does Claude Code. GitHub Copilot is Microsoft's native answer. The standalone coding-assistant category has been squeezed hard.
Browser and computer automation. OpenAI's Operator, Anthropic's Computer Use, Google's Project Astra. All three leading laboratories now ship products here. Every startup doing LLM-driven RPA is now playing defense.
RAG pipelines and vector search tools. Essentially commoditized. Most model APIs have built-in native retrieval capabilities. Differentiation at the framework level has disappeared.
General AI assistants and productivity tools. Have been directly consumed by Claude, ChatGPT, and Gemini.
Prompt management and evaluation tools. Increasingly becoming native features. LangSmith still has some space, but that is a race against time.
The pattern is remarkably consistent: a laboratory notices that a category has attracted serious developer interest, judges that it sits right next to its core product, and ships a version. That version is not necessarily better, but its integration, its default pricing, and its distribution are things no startup can match.
What should you do now?
If you are building an AI startup right now, the question is not "is there demand." There is demand everywhere. The question is: will this thing be obliterated by a product released by a laboratory with over $10 billion in the bank?
If the answer is "yes" or even "maybe," then it is not a business; it is a feature.
Durable strategies share these traits: deep vertical specificity (laboratories are good at general; they are not good at your particular specific), proprietary data or relationships that cannot be scraped from the open web, regulatory and compliance complexity that makes "just call the API" insufficient, and distribution in communities where trust and local context matter more than raw capability.
The gold rush is real. There is gold everywhere. But the miners are now running the stores, and they have effectively unlimited capital.
Sell jewelry. Treat gold as an industrial raw material. Create things that miners themselves are not interested in making—because they are too niche, too localized, and too deeply embedded in domains of knowledge they will never own.
This is what I consider the right strategy.