
Why vertical AI applications are rewriting the rules of the game.

Techub News
1 hour ago

Authored by: Deep Thought Circle

Do you also believe that OpenAI's trillion-dollar valuation means that general AI will devour the entire economy? If you think so, you may need to reassess. I recently read a deep analysis by Nikhil Davar and Byrne Hobart that completely changed my perspective on the competitive landscape of AI applications. They proposed a highly disruptive viewpoint: the true capture of immense economic value may not come from general AI platforms attempting to become "universal routers," but from specialized AI applications deeply entrenched in specific verticals that exist on the fringes of economic activity.

The case they used to illustrate this viewpoint was astonishing: OpenEvidence, an AI company in the healthcare sector, recently completed a financing round at a $12 billion valuation, double its $6 billion valuation from October of last year. More astonishingly, their annualized advertising revenue has reached $150 million, growing at 30% per month, with profit margins as high as 90%. A company only a few years old has attracted over 50% of U.S. doctors as users, with an average daily usage time of 14 minutes. The last time a technology product was adopted this swiftly by the physician community was when Google emerged.

This case prompted me to delve into a pressing question: In the AI era, how will the competitive landscape evolve between general platforms and vertical applications? Why can a specialized AI application focused on healthcare not only survive but also establish an almost unshakeable moat amidst giants like OpenAI, Anthropic, and Google?

What is the Essence of a Router?

Davar and Hobart presented a very interesting conceptual framework a few months ago called "Routers, Apps, AGI." Their core argument is that the true value of an AI chatbot lies not in how many questions it can answer, but in its ability to route queries to whatever can provide answers—another model, a third-party service, a service provider's checkout page, or the contact information of a consultant who needs to be hired. Essentially, this is a Hayekian vision: the greatest challenge is the transmission of dispersed information.

My understanding of this viewpoint is that whenever changes occur anywhere in the world, they unpredictably shift people's optimal behavior. You need some kind of system to convey this information to target recipients without overwhelming them with trivial details. The price mechanism is an elegant solution, but on-demand intelligence can operate across more dimensions. In other words, AGI is not a Nobel laureate in a data center but a kind of superhuman coordination technology that represents a certain high-fidelity simulation of the economy itself.

Imagine if we all wore Google Glasses, or further, had some kind of brain-computer interface, then Google’s exclusive access to these high-fidelity real-time sensors would allow it to coordinate vast economic activities by organizing all high-entropy information the moment it is created. OpenAI’s approximately trillion-dollar valuation bets on OpenAI's ability to apply this routing process across a larger economic share, or to execute it more precisely than any other company.

But I think this vision has a key assumption: that general routers can gather enough sensor data and establish sufficient trust for users to be willing to conduct various tasks through them. And Davar and Hobart's article precisely points out the problem with this assumption: there is too much economically valuable yet hard-to-identify "dark matter" that centralized large labs cannot see. Meanwhile, those more focused vertical applications have already begun to identify these economically valuable issues and will continue doing so for a considerable period.

How OpenEvidence Establishes an Unreplicable Moat

The appeal of OpenEvidence’s case lies in its clear demonstration of how vertical edge routers can not only survive but also thrive amidst the giants. Their strategy can be summed up in three words: trust, exclusivity, and compound effects.

Doctors may be one of the most credential-conscious groups in the world, partly because they spend their entire early adulthood acquiring a scarce human credential. A world where technology replaces credentials, human expertise, and expert institutions would be deeply uncomfortable for doctors. Davar and Hobart suggest a vivid experiment: next time you visit a doctor, ask them about health advice you found on Google or ChatGPT and watch their facial expression. I have experienced similar situations, and doctors' responses typically range between skepticism and disdain.

OpenEvidence deeply understands this and has implemented a comprehensive credibility strategy. They explicitly position themselves in contrast to labs trained on the open internet. The training data of those labs includes health blogs, social media, etc.—any treatment marketed with "doctors hate this strange trick" will appear in the broad training dataset. In contrast, OpenEvidence has trained a specialized set of models exclusively based on 35 million peer-reviewed sources, initially starting from public domain materials like FDA, CDC, and PubMed. Their models remain completely disconnected from the public internet during both training and inference.

This means their early system's risk of hallucination is significantly lower than the earlier reasoning paradigms of LLMs, and the product is free, facilitating viral adoption by doctors. A particularly clever aspect is that among those early adopters, several are senior members of the editorial boards of the most prestigious medical journals. This led to the next key link: OpenEvidence has been able to lock in exclusive content partnerships with JAMA, NEJM, NCCN, the American Medical Association, all 11 JAMA specialty journals, the American Academy of Family Physicians, the American College of Emergency Physicians, and more.

Here’s a particularly interesting detail. OpenEvidence's CEO Daniel Nadler provided some background: some well-funded AI companies have poured significant funds into NEJM, but they were turned down. If NEJM were a private company, they might have agreed, but as a non-profit organization, they refused because the Massachusetts Medical Society, as a non-profit, cares more about the sanctity and integrity of its nonprofit mission rather than just seeking a quick commercial contract. In fact, NEJM proactively reached out to OpenEvidence rather than the other way around: "In our case, we didn’t show up at their door. Many senior figures on the New England Journal of Medicine editorial board are heavy users of OpenEvidence, and they want their content to appear in the tools they are using."

I believe this case reveals a profound insight: in certain verticals, credentialing and trust are not merely marketing tools; they are core components of the product itself. General AI platforms cannot replicate this because their value proposition is fundamentally generality and convenience, not deep specialization and credibility in particular fields.

The Concept of Dark Matter: Value That Can Only Be Created, Not Discovered

The part of Davar and Hobart's article that struck me most was the discussion about "dark matter." This is not the dark matter of physics but refers to context information that is economically highly valuable yet difficult to identify. The dark matter created by OpenEvidence is the clinical uncertainty faced by doctors based on high-entropy, special patient situations.

Here is a key cognitive shift: this dark matter is not discovered but created. It is created entirely because of the existence of trust. Centralized routers cannot replicate this by providing superior general intelligence because doctors will not generate context on platforms they do not trust. You can model the collection of content doctors inquire about or disclose to OpenEvidence as the exact collection of content they are very hesitant to ask ChatGPT about: lack of trust generates extremely high verification costs, and not validating the highly risky outputs from untrusted sources means that valuable context simply will not be generated.

My understanding of this is that it completely subverts common assumptions about context and AI. The default mental model is one of discovery: valuable information exists somewhere in the world, and the role of sensors is to find it, capture it, and relay it back to the router. But OpenEvidence's service is closer to selling confirmation of informed guesses, along with the documentation supporting it.

The cognitive pathways of doctors—streaming real-time data about patient symptoms, diagnostic results, and histories, and distilling them into clinical hypotheses, and especially their doubts about those hypotheses ("clinical uncertainty")—have never previously existed in any system, whether local, cloud, or paper. Perhaps it existed briefly, as sound waves, when a doctor asked a trusted colleague for advice on a specific patient scenario. No one can feasibly survey doctors' diagnostic uncertainties in real time and at scale; even the best doctors won't fill out surveys, and even if they did, filling out a survey differs materially from the contextual disclosures that flow from real, novel patient cases under real stress and uncertainty.

Companies like Mercor, Surge, and Scale are trying to replicate this for the large labs, but the quality is not the same: a large volume of decent input cannot substitute for the best inputs. The companies hiring doctors to provide and evaluate answers for general AI tools are hiring doctors who have not made a fortune using specialized AI tools, which invites adverse selection. This is hard to change because of the time value of money: Mercor, Surge, and Scale pay you to train a model whose output will have value at some future point, while patients and insurers are already paying today for doctors' outputs, and those outputs are (at least in theory) of immense value today.

This insight has led me to rethink the sources of competitive advantage in AI applications. It is not about who has the larger model or more computational resources, but rather who can create an environment where users are willing to disclose their most valuable thoughts and uncertainties. This value creation is relational and interactive and cannot be replicated merely through data scraping or model upgrades.

Five Value Dimensions of Vertical Edge Routers

Davar and Hobart proposed a very useful framework for understanding the value of routers. They consider the value of routers to transcend raw intelligence, functioning as some multiplicative factor of several elements: the absolute number of problems solved, the economic value of these individuals, the relative economic value of the problems being solved for them, the proportion of information required to solve the problems emerging from users, and how completely you can solve these problems.
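The multiplicative nature of this framework is easy to sketch in code. The sketch below is illustrative only: the five factor names follow the article, but apart from the 600,000-user figure for the vertical case (which the article cites), every number is a hypothetical placeholder I invented; it simply shows how a router strong on all dimensions can outscore one with a vastly larger user base but weak remaining factors.

```python
# Illustrative sketch of the Davar/Hobart router-value framework:
# value scales multiplicatively across five dimensions, so a weak
# factor anywhere drags the whole product down. All inputs except the
# 600,000-user figure are hypothetical placeholders, not article data.

def router_value(num_users, user_economic_value, problem_value_share,
                 user_context_share, completeness):
    """Multiplicative router value. The first two factors are absolute
    scale terms; the last three are fractions between 0 and 1."""
    return (num_users * user_economic_value * problem_value_share
            * user_context_share * completeness)

# Hypothetical vertical router: fewer users, but high-value users and
# problems, rich user-originated context, near-complete solutions.
vertical = router_value(600_000, 350_000, 0.8, 0.9, 0.7)

# Hypothetical general router: far more users, lower values elsewhere.
general = router_value(500_000_000, 60_000, 0.05, 0.2, 0.2)

print(f"vertical: {vertical:.2e}, general: {general:.2e}")
```

The point of the multiplication is that the general router's enormous user count cannot compensate for low scores on problem value, context share, and completeness.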

OpenEvidence has exceptional advantages on all five dimensions. They solve problems for doctors (the highest-paid professionals in the U.S.) and for many doctors: as of last month, over 50% of doctors in the U.S., or 600,000 people, were using OpenEvidence, with an average usage of 14 minutes per day. They address the most economically valuable problems faced by doctors: clinical decision-making, which involves real-time diagnostics and treatment for patients amid uncertainty. They also offer the most complete solution so far: evidence based on the most prestigious medical journals, a perfect score on the USMLE (United States Medical Licensing Examination), helping doctors match patients to potentially life-saving clinical trials, providing context-aware treatment pathways, medications, medical devices, and more that are most likely to resolve patient issues and assist doctors in their work.

What I particularly appreciate about this framework is that it emphasizes the importance of "completeness." Many AI applications merely provide information or suggestions but cannot genuinely execute or complete tasks. OpenEvidence, on the other hand, is continuously expanding its actuator capabilities. Their primary actuator today is providing advertising (routing doctors' attention to pharmaceutical companies), but clinical trial matching represents a completely different and more valuable actuator (routing patients to trials). The next logical actuator could be prior authorization automation (routing payments), and it’s hard to see where this will stop: each new actuator expands the collection of solutions OpenEvidence can access and execute while keeping the context dark from centralized routers.

This leads me to think that the true moat of AI applications may not lie in the AI technology itself but in the comprehensive value chain they can establish. From information to suggestions, from suggestions to execution, and from execution to outcome verification, each step deepens the relationship with users, creating more dark matter and attracting more solution providers. This is a self-reinforcing flywheel, and general AI platforms struggle to establish such a flywheel across all verticals simultaneously.

Why General AI Cannot Win Everything

The biggest inspiration I drew from this article was its challenge to the mainstream narrative that "general AI will win everything." With OpenAI, Anthropic, and Google all launching healthcare products, why can OpenEvidence retain its persistence? The answer boils down to compound trust and the capabilities that arise from this trust.

The article pointed out a profound epistemological failure mode, which I believe is worth contemplating deeply. The entire intelligence theory of large labs presupposes that more data and more computation yield more capable, economically valuable systems across all domains. Their business models, capital expenditure strategies, and investor narratives fervently hope this is true. OpenEvidence's success offers a striking counterexample: it has created tremendous economic value using specialized models trained on far less data. Large labs find it hard to admit that in one of the most economically valuable domains, the right type of less data is superior to more data of every type.

To some extent, acknowledging this would challenge their entire strategy or at least imply that they may be asking the wrong questions. It indicates that rather than achieving one major victory from a better model, it is better to secure N wins across N topics where sufficient training data generates specialized models, none of which fully warrant the headlines. At this point, their business is closer to the business of Bloomberg or FactSet: gathering and cleaning data still generates significant revenue (and profits!), but it does not scale like general intelligence products.

I find this observation extremely insightful. It suggests that the future of AI may not be dominated by a unified general intelligence platform but rather by innumerable vertically specialized AI applications, each establishing deep moats in their respective fields. The total value of these applications may far exceed that of any single general platform, as they penetrate more deeply into various corners of economic activity, creating and capturing dark matter that general platforms cannot reach.

The Flywheel Effect: How Trust Creates Unreplicable Advantages

The success of OpenEvidence can be summarized as a powerful flywheel: exclusive credentialed ground truth makes sensors trustworthy → trust makes potential dark matter identifiable (over 50% of U.S. doctors disclose their clinical uncertainties daily because they trust the sensors) → identifiable dark matter is privately monetized without leaking to centralized routers (pharmaceutical companies pay $70-150 CPM for access to doctors' highest intention moments) → increasing numbers of solutions, from clinical trial patient recruitment to prior authorization and medical device discovery, continually join in, compounding the generation and capture of dark matter.
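As a rough sanity check, the revenue figures quoted here ($150 million annualized ad revenue, $70-150 CPM, roughly 600,000 daily physician users) imply a fairly modest ad load per doctor. The midpoint CPM and the per-doctor impression count below are my own back-of-envelope inferences, not figures from the article.

```python
# Back-of-envelope consistency check of the article's ad economics.
# CPM = cost per mille, i.e. price per 1,000 ad impressions.

annual_revenue = 150_000_000   # USD, annualized (from the article)
cpm_mid = 110.0                # USD per 1,000 impressions (midpoint of $70-150)
daily_doctors = 600_000        # daily physician users (from the article)

annual_impressions = annual_revenue / cpm_mid * 1_000
daily_impressions = annual_impressions / 365
impressions_per_doctor = daily_impressions / daily_doctors

print(f"~{impressions_per_doctor:.1f} ad impressions per doctor per day")
```

At roughly six impressions per doctor per day across a 14-minute session, the numbers hang together: the monetization comes from the extreme value of each impression, not from saturating doctors with ads.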

As more edge/industry-specific issues are identified and accurately and completely solved through an expanding solution ledger, a new signal is created and compounded: validated outcomes in the field. These validated outcomes (how effective certain matches for solving certain problems are) can enhance edge routers via reinforcement learning. This is an uncontested advantage that is difficult to replicate without the preceding steps and the time required to mature the process.

I want to particularly emphasize the importance of the time dimension. This is not an advantage that can be quickly replicated through increased resource investment. Trust takes time to build, exclusive partnerships need time to accumulate, user behavior data requires time to settle, and outcome validation needs time for observation. Even if OpenAI decided tomorrow to fully enter the healthcare sector, they could not quickly gain exclusive partnerships with NEJM, gain the immediate trust of the physician community, or quickly accumulate the dark matter generated from the daily usage of 600,000 doctors for 14 minutes.

This reminds me of the concept of compounding in investment. OpenEvidence is compounding its advantages every day: more usage generates more data, more data attracts more partners, more partners provide better solutions, and better solutions attract more users. Once this flywheel starts turning, it creates tremendous momentum that is hard to break.

Clinical Trial Matching: A Typical Case of Actuator Expansion

One specific example mentioned in the article particularly underscores this point: OpenEvidence has just launched clinical trial matching and patient recruitment features. Pharmaceutical companies currently pay billions of dollars annually to CROs (clinical research organizations) to recruit patients for clinical trials and run these trials, which is very slow and inefficient. If OpenEvidence can fill phase three trials faster than CROs and match better patients, pharmaceutical companies will reap significant benefits. The quicker the recruitment period, the sooner the trials start, the better the patients, and the higher the likelihood of trial success.

A faster, more likely successful trial means a drug enjoys monopoly profits under patent protection for a longer period. Specifically, the average pharmaceutical company currently spends about $2 billion annually just on patient recruitment (about $40,000 per patient), and 80% of trials are delayed. Each day of delay costs, depending on the drug, between $600,000 and $8 million in lost revenue. For blockbuster drugs (GLP-1 drugs, Keytruda, and the like), each day under patent protection is worth about $8 million. Pharmaceutical companies are willing to pay well above the $70-150 CPM price to accelerate this process.
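A quick back-of-envelope calculation shows why this math is so compelling. The per-patient recruitment cost and the blockbuster delay cost come from the article; the trial size and the number of days saved are hypothetical inputs I chose for illustration.

```python
# Back-of-envelope on trial-delay economics. Recruitment cost per
# patient ($40,000) and the blockbuster daily delay cost ($8M) are
# from the article; trial size and days saved are hypothetical.

patients = 1_000                          # hypothetical Phase 3 enrollment
recruitment_cost_per_patient = 40_000     # USD (from the article)
days_saved = 30                           # hypothetical acceleration
daily_delay_cost_blockbuster = 8_000_000  # USD/day (article's upper bound)

recruitment_spend = patients * recruitment_cost_per_patient
delay_savings = days_saved * daily_delay_cost_blockbuster

print(f"recruitment spend: ${recruitment_spend:,}")
print(f"value of {days_saved} days saved (blockbuster): ${delay_savings:,}")
```

Under these assumptions, a single month of acceleration on a blockbuster ($240 million) is worth several times the entire recruitment budget ($40 million), which is why sponsors would pay far above standard CPM rates for faster, better matching.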

This example profoundly clarifies the concept of actuator expansion. OpenEvidence's main actuator today is providing advertising, but clinical trial matching is a completely different and more valuable actuator. The next logical actuator could be prior authorization automation; it’s hard to see where this will end. Each new actuator expands the collection of solutions that OpenEvidence can access and execute while remaining dark to centralized routers regarding the context.

I believe this reveals a key advantage of vertical AI applications: they can naturally expand along the value chain because they are deeply embedded in specific industry ecosystems. General AI platforms may offer similar functionalities at a surface level, but they cannot penetrate the core processes and decision-making points of industries as deeply as vertical applications.

The Middle Game: The Era of Edge Routers

Davar and Hobart use the term "middle game" to describe the current state. Theoretically, over time, labs and centralized economic world models should be able to acquire everything, but there exists a middle game that is well worth delving into—especially as the rules of the game are gradually being revealed.

My understanding of this "middle game" is that we are in a transitional phase where the vision of general AI has not yet been fully realized, but vertical AI applications are already creating tangible value. This transition could last a long time, long enough for companies that build strong moats in vertical domains to grow into unshakeable giants.

The conclusion of the article presents a generalized argument that I find very convincing: wherever there exist two (or more) economically valuable yet difficult-to-identify context pools, and centralized routers cannot bridge (or are not trusted to bridge) them, edge routers have the opportunity to create trusted sensors, generating and capturing dark matter and making it identifiable to willing market participants.

This framework can be applied to many other verticals. Are there similar dark matters in the legal field? In the finance field? In education? In manufacturing? I believe the answer is a resounding yes. In each of these fields, there exist values that only deeply embedded specialized AI applications can capture.

My Reflection: The Dialectic of General vs. Specialized

After reading this article, I gained a fresh perspective on the competitive landscape of AI applications. I used to tend to believe the narrative that "general AI will dominate everything," but now I realize that might be overly simplistic. The reality may be closer to a multi-layered ecosystem: general AI platforms provide foundational capabilities, but the real value capture occurs at the level of vertical applications.

The case of OpenEvidence particularly made me acutely aware that in certain fields, trust and expertise cannot be bridged by more powerful technical capabilities. Doctors will not start disclosing their clinical uncertainties to ChatGPT merely because it is smarter. Similarly, lawyers will not begin relying on general AI to make critical decisions simply because it can understand legal texts. In these high-risk, high-expertise domains, building trust requires not just technical capabilities but also deep industry expertise, compliance, and integration with industry standards and practices.

I also started to consider what this means for entrepreneurs. If you are building AI applications, perhaps you should not aim to become the next OpenAI but find a sufficiently large and valuable vertical where you can establish a deep moat. The key is to identify those areas where a lot of "dark matter" exists—valuable contextual information that is difficult for general platforms to capture.

From an investment perspective, this also shifted my view on the valuation of AI companies. OpenEvidence's $12 billion valuation may seem high by traditional software company standards, but if you understand the flywheel effect and the depth of the moat they have built, the valuation begins to make sense. They are not just a software tool; they are a critical node in the medical decision-making ecosystem capable of creating and capturing value that no one else can touch.

Finally, I want to say that the most valuable aspect of this article is not its prediction of what will happen in the future but the clear framework it provides to think about the sources of competitive advantage in AI applications. Whether you are an entrepreneur, investor, or practitioner, understanding the concept of "edge router," the nature of "dark matter," and the irreplaceability of trust and expertise in certain verticals will help you make better judgments in this rapidly evolving field.

The future of AI may not be a winner-takes-all game but an ecosystem that includes both general platforms and countless vertical applications. In this ecosystem, real value capture may occur in vertical applications that can establish deep trust in specific fields, continuously create and capture dark matter, and expand their actuator capabilities. This is an exciting time as it means opportunities lie not just with the most resource-rich tech giants but also with specialized teams that can deeply understand the needs of specific fields and build genuine trust relationships.

Disclaimer: This article represents only the author's personal views and does not represent the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes on rights, please send the relevant proof of rights and identity to support@aicoin.com, and platform staff will investigate.
