
Explore the technological advantages of Cerebras Systems (CBRS), valuation dynamics, and broader investment implications.

Foresight News
39 minutes ago
Wafer-scale chips challenge NVIDIA, targeting a fair value of $192?

Written by: Noteflipper

Translation: AididiaoJP, Foresight News

Cerebras Systems (NASDAQ: CBRS) is a unicorn focused on wafer-scale AI chips. Its WSE-3 uses a single wafer as one complete computing unit and shows significant energy-efficiency and performance advantages in large-model inference and training. On May 13, 2026, the company priced its IPO at $185 per share; it listed on the 14th, surged more than 60% on the first day, and became one of the most anticipated AI hardware IPOs of 2026.

In the weeks leading up to the IPO, the crypto industry had already joined in enthusiastically: platforms led by Trade.xyz (built on Hyperliquid) launched a CBRS pre-IPO perpetual contract (IPOP), letting retail investors gain on-chain exposure to the stock ahead of listing. The product went live in early May, with trading prices well above the final IPO price, reflecting the market's fervor for AI hardware stocks. Crypto platforms currently offer Cerebras pre-IPO exposure mainly through such synthetic perpetuals rather than through real shares.

Summary

Cerebras Systems positions itself as a differentiated, pure-play AI infrastructure provider built around its single-chip Wafer Scale Engine (WSE-3) architecture. The company was expected to price its IPO on May 13 and begin trading on NASDAQ around May 14, making it one of the largest and most oversubscribed U.S. tech IPOs of 2026. With more than 20x oversubscription, pre-marketing indications of interest exceeding $10 billion, and a multi-year $24.6 billion milestone agreement with OpenAI, momentum is very strong.

Note: Cerebras Systems started trading on NASDAQ around May 14, reaching a high of $386.34

The core investment thesis is that Cerebras can eliminate inter-chip communication bottlenecks, delivering significant cost and energy-efficiency gains for large-scale AI workloads (especially inference). Meanwhile, the company faces the usual execution, customer-concentration, and competitive risks in a market still dominated by NVIDIA. The post-IPO valuation could land in the mid-$30-billion range or higher, a premium multiple on current revenue but one supported by visible contracted growth. This memorandum synthesizes the technological, financial, and market factors discussed, widening the lens to macro AI infrastructure trends.

Company and Technology Overview

Cerebras designs and builds the world's largest AI chips and systems. Its flagship WSE-3 treats an entire 300mm silicon wafer as a single computing unit (4 trillion transistors, 44 GB of on-chip SRAM, 21 PB/s memory bandwidth). This architecture fundamentally changes the relationship between compute and memory relative to traditional GPU clusters: data movement stays on-chip, greatly reducing latency, power consumption, and complexity.

Compared to NVIDIA (e.g., B200/H100 clusters), its key claimed advantages include:

  • Inference speeds up to 21x those of GPU clusters on models such as Llama 3.1 70B and other cutting-edge open-source LLMs.
  • Rack-level power consumption of roughly one-third at equivalent performance, implying roughly 60x better energy efficiency per token.
  • In large-scale inference, fitting entire models or layers into the massive on-chip SRAM raises throughput 2–3x and sharply reduces latency, eliminating the "memory wall."
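The per-token efficiency claim above follows from simple arithmetic: if throughput is roughly 21x higher while rack power drops to about one-third, energy per token falls by roughly 21 × 3 ≈ 63x. A minimal sketch, using the vendor-claimed ratios quoted above (not independently measured values):

```python
# Back-of-envelope check of the "~60x energy efficiency per token" claim.
# Inputs are the claimed ratios quoted in the text, not measured values.

throughput_ratio = 21.0   # claimed: ~21x more tokens/second vs. a GPU cluster
power_ratio = 1.0 / 3.0   # claimed: rack power is ~1/3 at equivalent performance

# Energy per token scales as power / throughput, so the gain in tokens
# per joule is throughput_ratio / power_ratio.
efficiency_gain = throughput_ratio / power_ratio

print(f"Implied energy-efficiency gain per token: ~{efficiency_gain:.0f}x")
```

The article's "approximately 60 times" figure is this product, rounded down.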

Pre-training and reinforcement learning (RL/RLHF) also benefit: in memory-bound workloads, synchronous training of large models is faster (time-to-solution improved by up to 10x), and RL rollout iterations accelerate markedly thanks to the higher inference throughput.

SRAM is not procured as a standalone module but is directly embedded and manufactured on the wafer as part of Cerebras' full-chip design using TSMC's 5nm process. Manufacturing is outsourced to TSMC, and Cerebras does not own its own wafer fab.

IPO Mechanism, Demand, and Capital Structure

Issuance: 28 million Class A shares (an underwriters' overallotment option adds up to 4.2 million more), initial price range $115–125. Strong demand has prompted an expected raise of the range (to $125–135, possibly $150–160), and the offering may grow to about 30 million shares, raising roughly $4–4.8 billion at a fully diluted valuation possibly exceeding $34 billion.
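The proceeds math above is easy to check. A minimal sketch using the figures quoted in this section (share counts and prices are the reported ranges, not confirmed final deal terms):

```python
# Rough IPO proceeds check from the figures quoted above.
base_shares = 28_000_000       # Class A shares in the base offering
overallotment = 4_200_000      # underwriters' option ("greenshoe")
upsized_shares = 30_000_000    # possible upsized base offering

low, high = 150, 160           # upper expected price range, USD/share

# Gross proceeds if the upsized deal prices within the raised range
# (excluding the overallotment option).
proceeds_low = upsized_shares * low
proceeds_high = upsized_shares * high
print(f"Gross proceeds: ${proceeds_low/1e9:.1f}B - ${proceeds_high/1e9:.1f}B")

# With the greenshoe fully exercised at the high end:
with_shoe = (upsized_shares + overallotment) * high
print(f"With overallotment: ${with_shoe/1e9:.2f}B")
```

At 30 million shares, the $150–160 range yields the $4.5–4.8 billion upper bound cited; the ~$4 billion low end corresponds to pricing nearer the bottom of the raised range.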

Demand signals: oversubscription of more than 20x, with pre-marketing indications of interest exceeding $10 billion. Underwriters require limit orders, and even large institutional allocations will be scaled back. The vast majority of expressed demand will therefore remain unfilled going into first-day public trading, a setup that typically produces a large first-day pop.

Financial snapshot (2025): revenue of approximately $510 million (up 76% year-over-year), gross margin of about 39% (roughly 43% for hardware, lower for services/cloud). Non-GAAP operating losses persist; backlog conversion and infrastructure build-out will drive near-term cash consumption.

Ownership notes: OpenAI holds warrants (33.4–33.5 million Class N non-voting shares, exercise price approximately $0.00001) that vest in tranches against contracted purchase milestones (full vesting is tied to a multi-gigawatt commitment running to 2030). Intel has not disclosed any significant corporate holding. A standard 180-day lock-up applies to pre-IPO shares and insiders; the OpenAI warrants are milestone-driven and are not immediately locked up upon exercise.

Efficiency, Data Centers, and Ecosystem Impact

Wafer-scale design sharply reduces dependence on high-speed interconnects, cabling, switches, and complex rack structures, because most communication happens on-chip rather than across thousands of discrete GPUs. This implies:

  • Negative for high-bandwidth memory (HBM) in inference-heavy deployments, since on-chip SRAM displaces off-chip DRAM traffic. The major HBM suppliers (SK Hynix, Samsung, Micron, together about 97% of the market) could see demand erosion if wafer-scale inference scales up.
  • Positive for TSMC, the foundry that benefits from dense on-chip SRAM/logic integration.
  • Negative for traditional data center connectivity suppliers (cable makers, networking vendors, some rack/cooling specialists) in Cerebras-centric builds, while power-delivery and cooling suppliers may benefit from the higher density.

For hyperscalers and cloud service providers, the lower power envelope (and thus lower power/cooling opex) could lift gross/operating margins on equivalent inference workloads by roughly 10–20% or more. In a world constrained by the power grid and rising energy costs, this changes the calculus of new data center construction: more compute per available MW, faster deployment, and materially lower TCO.
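As a rough illustration of that margin claim, consider a hypothetical inference service in which power and cooling make up a given share of cost of revenue; cutting that line to one-third (per the efficiency claims earlier) lifts gross margin accordingly. All inputs below are illustrative assumptions, not Cerebras or customer figures:

```python
# Illustrative gross-margin impact of cutting power/cooling opex.
# All inputs are hypothetical assumptions for a generic inference service.

revenue = 100.0            # normalized revenue
hardware_cost = 35.0       # depreciation etc. (assumed)
power_cooling_cost = 25.0  # power + cooling share of cost (assumed)
other_cost = 10.0          # staffing, networking, etc. (assumed)

def gross_margin(power_cost: float) -> float:
    cost = hardware_cost + power_cost + other_cost
    return (revenue - cost) / revenue

baseline = gross_margin(power_cooling_cost)
# Rack power at ~1/3 of baseline, per the efficiency claims above.
improved = gross_margin(power_cooling_cost / 3)

print(f"Baseline gross margin: {baseline:.0%}")
print(f"Improved gross margin: {improved:.0%}")
print(f"Margin uplift: {improved - baseline:.0%} points")
```

Under these assumed cost shares the uplift lands at roughly 17 percentage points, inside the 10–20% band the text cites; the real figure depends entirely on how large the power/cooling line is for a given operator.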

Bull and Bear Scenarios

Bull Scenario

  • Explosive AI demand + differentiated technology = rapid backlog conversion and increased market share in inference and large-scale model training.
  • Energy and TCO advantages become decisive factors in power-constrained environments.
  • A hot AI equity market plus momentum sets up a strong IPO pop.
  • OpenAI collaboration validates platforms and provides visibility into multi-year revenue.

Bear Scenario

  • NVIDIA's ecosystem moat, software stack, and ongoing innovation remain overwhelmingly dominant.
  • Customer concentration risk (OpenAI and a few hyperscale clouds) and uncertainty in agreement timing (power/infrastructure dependence).
  • Gross margins (currently about 39%) lag NVIDIA's 75–80% and may stay in the low-to-mid 40% range near term due to wafer costs and product mix.
  • Execution risk in scaling up manufacturing and meeting volume delivery commitments efficiently.
  • Valuation incorporates aggressive growth; any delivery shortfall could trigger a substantial revaluation.

Valuation Considerations

The implied pre-IPO market value ranges from $26 billion to $42 billion, depending on final pricing. Analyst revenue forecasts point to roughly $1.1 billion in 2026 and about $2.3 billion in 2027 (a CAGR above 100%). Traditional forward PEG calculations are not yet possible (non-GAAP operating losses persist), but growth-adjusted multiples sit at a premium, consistent with other high-conviction AI infrastructure names. Post-IPO trading will be the final arbiter; observed demand points to near-term upward pressure, but the lock-up expiry and milestone-driven OpenAI share issuance introduce later-stage risks.
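The premium multiple the section describes can be made concrete with a forward price-to-sales check against the figures above (the valuation band and revenue estimates are the ranges quoted in this memo, not audited numbers):

```python
# Forward price/sales multiples implied by the figures quoted above.
valuation_low, valuation_high = 26e9, 42e9  # implied pre-IPO market value, USD
rev_2026 = 1.1e9                            # analyst revenue forecast, 2026
rev_2027 = 2.3e9                            # analyst revenue forecast, 2027

for label, rev in (("2026E", rev_2026), ("2027E", rev_2027)):
    low_mult = valuation_low / rev
    high_mult = valuation_high / rev
    print(f"{label}: {low_mult:.0f}x - {high_mult:.0f}x forward sales")
```

Roughly 24–38x 2026E sales, compressing to 11–18x on 2027E if the forecast growth materializes, which is the sense in which the multiple is "supported by visible agreement growth."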

Wider Industry Context and Investment Considerations

AI infrastructure build-out is entering a new phase. Hyperscalers and enterprises face not only capital expenditure but also power availability, cooling limits, and total cost of ownership. Wafer-scale and other "rethink the chip" approaches (Cerebras, Groq, and potential future entrants) pose a credible challenge to GPU-centric models for specific high-value workloads, especially memory-bound inference and large-scale training/RL. While NVIDIA is expected to keep the largest market share for the foreseeable future, gradually diversifying into alternative architectures may hold strategic value for cloud providers seeking cost, power, and performance advantages.

Energy is increasingly the binding constraint on AI expansion. Regions with cheap or abundant power will have a structural advantage, and technologies that deliver more tokens or FLOPs per watt (and per capex dollar) will gain pricing power and faster adoption. Growing regulatory scrutiny of AI energy consumption and data center permitting adds another layer, favoring efficient players.

Longer term, the wafer-scale thesis tests whether the industry shifts toward fewer, larger chips rather than thousands of smaller interconnected GPUs. If it succeeds, Cerebras could carve out a lasting niche; if not, it may remain a high-cost niche player in a GPU-dominated world. Broader AI sentiment, the interest-rate trajectory, and any macro slowdown in capital expenditure remain key exogenous variables.

Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes rights, please email proof of ownership and identity to support@aicoin.com, and platform staff will investigate.
