Original Author: Matt Liston
Original Translation: AididiaoJP, Foresight News
In November 2024, prediction markets called the election before anyone else. While polls showed a tight race and experts hedged, the markets gave Trump roughly a 60% chance of winning. When the results came in, prediction markets had outperformed the entire forecasting establishment: polls, models, expert judgment, everything.
This demonstrated that markets can aggregate dispersed information into accurate beliefs, with risk-sharing as the mechanism that makes it work. Economists have dreamed since the 1940s of speculative markets that surpass expert prediction, and that dream has now been validated on the grandest stage.
But let’s delve deeper into the economic principles at play.
Bettors on Polymarket and Kalshi provide billions of dollars in liquidity. What do they get in return? They generate a signal that the whole world can see instantly, for free. Hedge funds observe it, campaign teams absorb it, and journalists build data dashboards around it. No one needs to pay for this information; the bettors are effectively subsidizing a global public good.
This is the dilemma prediction markets face: their most valuable output, the information itself, leaks the moment it is created. Savvy buyers won't pay for what is already public. Private data providers can charge hedge funds high fees precisely because their data is invisible to competitors. Publicly posted prediction-market prices, no matter how accurate, are worthless to such buyers.
Thus, prediction markets can only exist in areas where enough people want to "gamble": elections, sports, internet meme events. As a result, we end up with a form of entertainment disguised as an information infrastructure. Crucial questions for decision-makers—geopolitical risks, supply chain disruptions, regulatory outcomes, timelines for technological developments—remain unanswered because no one will bet on them for entertainment.
The economic logic of prediction markets is inverted. Correcting this is part of a larger transformation. Information itself is the product; betting is merely a mechanism for producing information, and a limited one at that. We need a different paradigm. Below is a preliminary outline of "Cognitive Finance": an infrastructure redesigned around information itself, starting from first principles.
Collective Intelligence
Financial markets themselves are a form of collective intelligence. They aggregate dispersed knowledge, beliefs, and intentions into prices, coordinating the actions of millions of participants who never communicate directly. This is remarkable but also extremely inefficient.
Traditional markets operate slowly due to constraints of trading hours, settlement cycles, and institutional friction. They can only express beliefs in a coarse manner through prices. What they can represent is very limited, merely the space of tradable propositions, which is trivial compared to the space of issues that truly matter to humans. Additionally, participants are severely restricted: regulatory barriers, capital requirements, and geographical constraints exclude the vast majority of people and all machines.
The emergence of crypto is beginning to change this: continuous markets, permissionless participation, programmable assets, and modular protocols that can be combined without central coordination. DeFi (Decentralized Finance) has proven that financial infrastructure can be rebuilt as open, interoperable building blocks that arise from the interaction of autonomous modules rather than the decrees of gatekeepers.
However, DeFi largely replicates traditional finance with better "pipes." Its collective intelligence is still based on prices, focused on assets, and absorbs new information slowly.
Cognitive Finance is the next step: rebuilding intelligent systems themselves for the AI and crypto era, starting from first principles. We need markets that can "think," maintaining probabilistic models about the world, capable of absorbing information with arbitrary granularity, available for AI systems to query and update, allowing humans to contribute knowledge without needing to understand the underlying structure.
The components to achieve this are not mysterious: using private markets to correct economic models, using composite structures to capture correlations, using agent ecosystems to scale information processing, and using human-machine interfaces to extract signals from human brains. Each part can be built today, and when combined, they will create something new with transformative significance.
Private Markets
If prices are not public, the broken economics can be fixed.
A private prediction market only allows entities that subsidize liquidity to see prices. This entity thus gains exclusive signals, proprietary intelligence, rather than a public good. In this way, the market suddenly becomes viable for any question where "someone needs an answer," regardless of whether anyone is willing to bet for entertainment.
I discussed this concept with @_Dave_White_.
Imagine a macro hedge fund that wants continuous probability estimates about Federal Reserve decisions, inflation outcomes, and employment data as decision signals rather than betting opportunities. As long as the intelligence is exclusive, they are willing to pay for it. A defense contractor wants a probability distribution of geopolitical scenarios, and a pharmaceutical company wants predictions on regulatory approval timelines. However, these buyers do not exist today because once information is generated, it immediately leaks to competitors.
Privacy is the foundation for economic models to hold. Once prices are public, information buyers lose their advantage, competitors start to free ride, and the entire system regresses to relying solely on entertainment demand.
Trusted Execution Environments (TEE) make all this possible; they are secure computing enclaves where the computation process is invisible to the outside world (even to system operators). The market state exists entirely within the TEE. Information buyers receive signals through verified channels. Multiple non-competing entities can subscribe to overlapping markets; layered access windows can balance information exclusivity with broader distribution.
TEEs are not flawless; they require trust in hardware manufacturers. However, they can provide sufficient privacy guarantees for commercial applications, and the relevant engineering technologies are now quite mature.
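One way to picture the layered-access idea is a price feed whose history lives inside the enclave and is released per subscriber with a tier-specific delay. The sketch below is illustrative only: in a real deployment the state would sit inside an attested TEE and subscribers would verify the enclave via remote attestation; here a shared-secret HMAC stands in for that attested channel, and the subscriber names and delays are invented.

```python
import hashlib
import hmac

class PrivateMarketFeed:
    """Sketch of tiered price access for a private prediction market.

    Not a real enclave: an HMAC over a pre-shared key stands in for a
    remotely attested channel. Each subscriber gets a delay window, so
    an exclusive buyer sees fresh prices while a broader tier sees
    only stale ones.
    """

    def __init__(self, subscriber_secrets):
        # subscriber_id -> (shared secret, seconds before prices become visible)
        self._subs = subscriber_secrets
        self._history = []               # (timestamp, price): enclave-internal state

    def record_price(self, ts, price):
        self._history.append((ts, price))

    def read(self, subscriber_id, mac, now):
        """Return the freshest price outside the subscriber's delay window."""
        secret, delay = self._subs[subscriber_id]
        expected = hmac.new(secret, subscriber_id.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            raise PermissionError("unverified channel")
        visible = [p for t, p in self._history if now - t >= delay]
        return visible[-1] if visible else None

# Demo with made-up keys: an exclusive fund (no delay) vs. a delayed tier.
feed = PrivateMarketFeed({"fund_a": (b"k1", 0.0), "press": (b"k2", 3600.0)})
feed.record_price(0.0, 0.62)
feed.record_price(100.0, 0.65)
mac = hmac.new(b"k1", b"fund_a", hashlib.sha256).hexdigest()
fresh = feed.read("fund_a", mac, now=100.0)   # exclusive tier sees the latest price
```

The delay parameter is the knob the text describes: setting it to zero gives one buyer an exclusive signal, while longer windows trade exclusivity for wider distribution.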
Composite Markets
Current prediction markets treat events as isolated from one another. "Will the Federal Reserve cut rates in March?" trades in one market; "Will inflation exceed 3% in the second quarter?" trades in another. A trader who understands how these events are connected, say, that high inflation makes a rate cut less likely, or that weak employment makes one more likely, must manually arbitrage between disconnected pools of capital, trying to reconstruct correlations that the market structure itself has severed.
It's like building a brain where each neuron can only fire in isolation.
Composite prediction markets, on the other hand, maintain a "joint probability distribution" over combinations of multiple outcomes. A trade expressing "interest rates remain high and inflation exceeds 3%" will create ripples across all related markets in the system, synchronously updating the entire probability structure.
This is similar to how neural networks learn: during training, each gradient update simultaneously adjusts billions of parameters, and the entire network responds holistically to each piece of data. Similarly, every trade in a composite prediction market updates its entire probability distribution, with information propagating through the correlation structure rather than merely updating isolated prices.
What emerges is a "model," a probability distribution that continuously updates over the state space of world events. Each trade optimizes this model's understanding of the correlations between things. The market learns how the real world is interconnected.
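One concrete way to get this behavior is a logarithmic market scoring rule (LMSR) maintained over the full joint outcome space rather than one per event. The sketch below is illustrative, not the author's design; the event names and liquidity parameter are invented. The key property is the one the text describes: a single trade on a conjunction moves the implied marginal probability of every correlated event at once.

```python
import math
from itertools import product

class JointLMSR:
    """A minimal LMSR market maker over joint binary outcomes.

    Holds one share count per combination of events, so its cost
    function defines a full joint probability distribution, and any
    trade updates every correlated marginal simultaneously.
    """

    def __init__(self, events, b=10.0):
        self.events = events             # e.g. ["rates_high", "inflation_gt_3"]
        self.b = b                       # liquidity parameter (illustrative value)
        self.q = {o: 0.0 for o in product([0, 1], repeat=len(events))}

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(v / self.b) for v in q.values()))

    def prices(self):
        z = sum(math.exp(v / self.b) for v in self.q.values())
        return {o: math.exp(v / self.b) / z for o, v in self.q.items()}

    def marginal(self, event):
        """Implied probability that a single event occurs."""
        i = self.events.index(event)
        return sum(p for o, p in self.prices().items() if o[i] == 1)

    def buy(self, spec, shares):
        """Buy shares of every joint outcome matching a partial spec,
        e.g. {"rates_high": 1, "inflation_gt_3": 1}. Returns the cost."""
        before = self._cost(self.q)
        for o in self.q:
            if all(o[self.events.index(e)] == v for e, v in spec.items()):
                self.q[o] += shares
        return self._cost(self.q) - before

m = JointLMSR(["rates_high", "inflation_gt_3"])
before = m.marginal("inflation_gt_3")                      # starts at 0.5
m.buy({"rates_high": 1, "inflation_gt_3": 1}, shares=5.0)  # trade the conjunction
after = m.marginal("inflation_gt_3")                       # moves, as does rates_high
```

Note the cost of this fidelity: the joint state grows as 2^n in the number of events, which is why practical composite markets need factored or approximate representations rather than the dense table shown here.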
Intelligent Ecosystem
Automated trading systems already dominate Polymarket. They monitor prices, spot mispricings, execute arbitrage, and aggregate external information at speeds far beyond any human.
Current prediction markets are designed for human bettors using web interfaces. Agents are "reluctantly" participating under this design. An AI-native prediction market would completely reverse this logic: agents become the primary participants, while humans are integrated into the system as sources of information.
One architectural decision is crucial: complete isolation. Agents that can see prices must never be information sources, and agents responsible for acquiring information must never have access to prices.
Without this wall, the system cannibalizes itself. An agent that can both acquire information and observe prices can reverse-engineer which information is valuable from price movements and then seek it out itself. The market's own signals become a treasure map for everyone else, and information acquisition degrades into an elaborate form of front-running. Isolation ensures that information-acquiring agents can only profit by providing genuinely novel signals.
On one side of the "wall": there are trading agents competing in complex composite structures to identify mispricings; and evaluation agents that assess incoming information through adversarial mechanisms, distinguishing between signals, noise, and manipulation.
On the other side of the "wall": there are information-acquiring agents that operate entirely outside the core system. They monitor data streams, scan documents, and engage with individuals possessing unique knowledge—feeding information unidirectionally into the market. When their information proves valuable, they can receive compensation.
Compensation flows backward along the chain. A profitable trade rewards the executing agent, the evaluating agent of that information, and the acquiring agent that initially provided the information. This ecosystem thus becomes a platform: on one hand, allowing highly specialized AI agents to monetize their capabilities; on the other hand, serving as a foundational layer for other AI systems to gather intelligence to guide their actions. The agents are the market itself.
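The wall and the backward-flowing compensation can be sketched as a capability split: information-side agents receive only a write-only handle, while the price-side core settles profits back along the attribution chain. This is a minimal illustration, not a specification; the role names and the 50/20/30 payout split are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source_id: str
    claim: str

class WriteOnlyInbox:
    """The only handle information-acquiring agents receive:
    they can submit signals but can never read prices or positions."""
    def __init__(self, store):
        self._store = store

    def submit(self, signal):
        self._store.append(signal)

class MarketCore:
    """Price-side state, visible only to trading and evaluation agents."""
    def __init__(self):
        self._signals = []
        self.inbox = WriteOnlyInbox(self._signals)   # handed across the wall
        self.payouts = {}

    def settle(self, signal, trade_pnl):
        """A profitable trade pays the executing trader, the evaluator
        that vetted the signal, and finally the agent that sourced it."""
        for payee, share in [("trader", 0.5), ("evaluator", 0.2),
                             (signal.source_id, 0.3)]:
            self.payouts[payee] = self.payouts.get(payee, 0.0) + trade_pnl * share

core = MarketCore()
sig = Signal("scout-7", "product launch likely delayed")
core.inbox.submit(sig)             # info side: write-only, no price access
core.settle(sig, trade_pnl=10.0)   # trade side: attribute profit backward
```

The point is the interface asymmetry: nothing in `WriteOnlyInbox` exposes market state, so an information agent's only path to payment is supplying signals that later prove profitable.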
Human Intelligence
A vast amount of the world's most valuable information exists only in human minds. For example, engineers who know their product's progress is already behind; analysts who detect subtle shifts in consumer behavior; observers who notice details that even satellites cannot see.
An AI-native system must be able to capture these signals from human brains without being overwhelmed by massive noise. Two mechanisms make this possible:
Agent mediation participation: allowing humans to "trade" without seeing prices. A person only needs to express beliefs in natural language, such as "I think the product launch will be delayed." A dedicated "belief translation agent" will parse this prediction, assess its confidence, and ultimately convert it into a position in the market. This agent coordinates with the system that has access to prices to construct and execute orders. Human participants will only receive rough feedback: "Position established" or "Insufficient edge." Compensation is settled based on prediction accuracy after the event, with price information kept confidential throughout.
Information markets: allowing information-acquiring agents to pay directly for human signals. For example, an agent wanting to understand a tech company's profitability can identify an engineer with relevant insider knowledge, purchase an evaluation report from them, and subsequently verify and compensate based on the value of that information in the market. Humans are rewarded for their knowledge without needing to understand the complex market structure.
Take analyst Alice as an example: she believes, based on her professional judgment, that a certain merger will not pass regulatory approval. She inputs this viewpoint through a natural language interface, and her "belief translation agent" parses the prediction, assesses her confidence from the linguistic details, checks her historical record, and constructs an appropriate position without ever seeing the price. A "coordinating agent" at the TEE boundary determines whether her viewpoint has an informational advantage based on the current market-implied probabilities and executes the trade accordingly. Alice will only receive notifications of "Position established" or "Insufficient edge." The price remains confidential at all times.
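Alice's flow can be sketched end to end. Everything below is hypothetical: a real belief-translation agent would use an LLM plus the contributor's calibration history, not the toy keyword-to-confidence map shown here, and the 5% edge threshold is an invented parameter. What the sketch preserves is the boundary the text describes: the price is consulted only inside `coordinate`, and the contributor receives only coarse feedback.

```python
import re

# Toy mapping from hedging language to confidence; purely illustrative.
HEDGES = {"might": 0.55, "probably": 0.7, "think": 0.7,
          "certain": 0.9, "definitely": 0.9}

def parse_belief(text):
    """Belief-translation step: map a natural-language prediction
    to (direction, confidence) without any market access."""
    t = text.lower()
    direction = "no" if re.search(r"\b(not|won't|will not)\b", t) else "yes"
    conf = max((p for w, p in HEDGES.items() if w in t), default=0.6)
    return direction, conf

def coordinate(direction, confidence, market_prob, min_edge=0.05):
    """Coordinating agent at the TEE boundary: the only function that
    sees the price, and it returns only coarse feedback."""
    implied = confidence if direction == "yes" else 1.0 - confidence
    if abs(implied - market_prob) >= min_edge:
        return "position established"   # order built and executed inside the enclave
    return "insufficient edge"

d, c = parse_belief("I think the merger will not pass regulatory approval")
feedback = coordinate(d, c, market_prob=0.6)   # Alice never sees the 0.6
```

Here Alice's "will not pass" parses to a "no" direction at 0.7 confidence, an implied probability of 0.3 against a market at 0.6, so the coordinator establishes a position while the price stays confidential.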
This architecture views human attention as a scarce resource that needs to be carefully allocated and fairly compensated, rather than a public resource that can be mined at will. As these interactive interfaces mature, human knowledge will become "liquid": the information you know will flow into a global reality model and be rewarded when proven correct. Information trapped in minds will no longer be confined.
Future Vision
Zoom out far enough and we can glimpse where all this leads.
The future will be an ocean composed of fluid, modular, and interoperable relationships. These relationships will spontaneously form and dissipate between human and non-human participants, without any central gatekeepers. This is a form of "fractal autonomous trust."
Agents negotiate with agents, humans contribute knowledge through natural interfaces, and information continuously flows into a perpetually updated reality model that anyone can query, but no one can control.
Today's prediction markets are merely a primitive sketch of this vision. They validate the core concept (that risk-sharing can generate accurate beliefs), but they are trapped in flawed economic models and incorrect structural assumptions. Sports betting and election forecasting in cognitive finance are akin to ARPANET (the early internet) in relation to today's global internet: it is a "proof of concept" mistakenly regarded as the ultimate form.
The true "market" is every decision made under uncertainty, which means almost all decisions. The value of reducing uncertainty in supply chain management, clinical trials, infrastructure planning, geopolitical strategy, resource allocation, and personnel decisions far exceeds the entertainment value of betting on sports events. We simply have not yet built the infrastructure capable of capturing this value.
What is coming is the "OpenAI moment" in the cognitive domain: an infrastructure project on a civilizational scale, but its goal is not individual reasoning, but collective belief. Large language model companies are building systems that "reason" from past training data; cognitive finance aims to build systems that "believe"—capable of maintaining calibrated probability distributions about the state of the world, continuously updated through economic incentives (rather than gradient descent), and integrating human knowledge with arbitrary granularity. LLMs encode the past; prediction markets aggregate beliefs about the future. The combination of the two can form a more complete cognitive system.
When fully expanded, this will evolve into an infrastructure: AI systems can query it to understand the uncertainties of the world; humans can contribute knowledge to it without needing to understand its internal mechanisms; it can absorb local knowledge from sensors, domain experts, and cutting-edge research, and synthesize it into a unified model. A self-optimizing, predictive world model. A foundation where uncertainty itself can be traded and combined. The intelligence that ultimately emerges will surpass the sum of its parts.
The computer of civilization is precisely the direction that cognitive finance strives to build.
Stakes
All the pieces are in place: agent capabilities have crossed the threshold needed for prediction; confidential computing has moved from the lab into production; prediction markets have proven large-scale product-market fit in entertainment. These threads converge on a specific historic opportunity: building the cognitive infrastructure the age of artificial intelligence will need.
Another possibility is that prediction markets remain forever at the entertainment level, performing accurately during elections but ignored at other times, never able to touch on the truly important issues. At that point, the infrastructure on which AI systems rely to understand uncertainty will not exist, and the valuable signals trapped in human minds will remain forever silent.