Understanding the Profit Pools and Industrial Landscape of AI Storage Hierarchies in One Article

PANews
4 hours ago

Author: Godot

AI storage can be divided into six layers:

1) On-chip SRAM

2) HBM

3) Mainboard DRAM

4) CXL pooling layer

5) Enterprise-level SSD

6) NAS and cloud object storage

The hierarchy is ordered by physical location: the farther a layer sits from the compute units, the larger its capacity and the higher its access latency.
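The six layers above can be sketched as a simple table. The capacity and latency figures below are rough order-of-magnitude assumptions for a 2025-era accelerator node, added for illustration; they are not from the article.

```python
# Illustrative sketch of the six-layer AI storage hierarchy.
# Capacity/latency values are ballpark assumptions, not measured data.
from dataclasses import dataclass

@dataclass
class StorageTier:
    name: str
    typical_capacity: str  # per device or per node
    typical_latency: str   # order of magnitude

HIERARCHY = [
    StorageTier("L0 on-chip SRAM",       "tens of MB",        "~1 ns"),
    StorageTier("L1 HBM",                "~80-200 GB per GPU", "~100 ns"),
    StorageTier("L2 mainboard DRAM",     "~0.5-2 TB per node", "~100 ns"),
    StorageTier("L3 CXL pooled memory",  "tens of TB per rack", "~300-600 ns"),
    StorageTier("L4 enterprise SSD",     "up to ~122 TB per drive", "~10-100 us"),
    StorageTier("L5 NAS / object store", "PB to EB scale",    "ms and up"),
]

# Farther from compute -> larger capacity, higher latency.
for tier in HIERARCHY:
    print(f"{tier.name:<26} {tier.typical_capacity:<24} {tier.typical_latency}")
```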

By 2025, the total market for these six layers (excluding embedded value for SRAM on computing chips) is approximately $229 billion, with DRAM accounting for half, HBM for 15%, and SSD for 11%.

In terms of profit, each layer is extremely oligopolistic, with the top three generally holding over 90% market share.

These profit pools can be divided into three categories:

1) High-margin oligopoly pool at the silicon layer (HBM, embedded SRAM, QLC SSD)

2) High-margin emerging pool at the interconnect layer (CXL)

3) Scaled compound pool at the service layer (NAS, cloud object storage)

Each type of pool has different characteristics, growth rates, and moats.

Why is storage layered?

Because the CPU, which handles control, and the GPU, which handles computation, carry only small on-chip caches, that is, SRAM. This cache is far too small to hold a large model; it stores only transient working data.

Outside these two chips, larger external memories are needed to store large models and inference contexts.

Computation itself is fast; the main problem is that moving data between storage layers costs latency and energy.

So there are currently three directions:

1) Stack HBM next to the GPU to shorten the data path

2) Pool memory at the rack level with CXL so capacity can be shared

3) Integrate compute and storage on the same wafer (unified compute-storage)

These three directions will shape the profit pools of each layer in the next five years.

Here is the specific layering:

L0 On-chip SRAM: A profit pool exclusive to TSMC

SRAM (Static Random-access Memory) is the cache inside CPUs/GPUs, embedded in each chip, and is not traded independently.

The market for standalone SRAM chips is only $1 to $1.7 billion, led by Infineon (about 15%), Renesas (about 13%), and ISSI (about 10%); it is a small market.

This profit pool instead sits inside TSMC: each generation of AI chips must buy more wafer area to fit more SRAM.

TSMC holds over 70% of the world's advanced-process wafer capacity, so the SRAM area on every H100, B200, TPU v5, and similar chip ultimately turns into TSMC revenue.

L1 HBM: The largest profit pool of the AI era

HBM (High Bandwidth Memory) stacks DRAM (Dynamic Random-access Memory) dies vertically with through-silicon vias (TSVs) and packages the stack next to the GPU to deliver very high bandwidth.

HBM almost single-handedly determines how large a model AI accelerators can run. SK Hynix, Micron, and Samsung have nearly 100% market share.

As of Q1 2026, the latest market share distribution is: SK Hynix at 57% to 62%, Samsung at 22%, and Micron at 21%. SK Hynix has secured a substantial share of purchases from companies like NVIDIA, making it the current dominant supplier.

Micron mentioned in its Q1 2026 earnings call that the HBM total addressable market (TAM) is expected to grow at a compound annual growth rate (CAGR) of about 40%, from approximately $35 billion in 2025 to $100 billion by 2028, which is two years earlier than previous forecasts.

The core advantage of HBM lies in its extremely high profit margin. In Q1 2026, SK Hynix achieved a record operating profit margin of 72%.

Reasons for the high profits include:

1) TSV manufacturing processes sacrifice some traditional DRAM production capacity, keeping HBM in a state of supply-demand imbalance;

2) Yield improvement in advanced packaging is difficult; Samsung's previous market share fell from 40% to 22% due to this;

3) Major suppliers expand capacity cautiously; DRAM average selling prices (ASPs) rose more than 60% quarter-over-quarter in Q1 2026, a clear seller's market.

Among the three giants, SK Hynix is the one most strongly driven by HBM: its full-year 2025 operating profit reached 47.21 trillion won, surpassing Samsung Electronics for the first time in history, and its 72% operating margin in Q1 2026 exceeded even TSMC (58.1%) and NVIDIA (65%).

Micron carries the highest growth expectations, with Bank of America (BofA) raising its price target sharply to $950 in May 2026. Samsung has the most room to recover market share as it ramps mass production of HBM4.

L2 Mainboard DRAM

This layer is what we usually refer to as memory sticks.

Mainboard DRAM covers DDR5, LPDDR, GDDR, MR-DIMM, and other conventional memory products. It is currently the largest slice of AI storage by revenue, with the global DRAM market totaling approximately $121.83 billion in 2025.

Samsung, SK Hynix, and Micron still occupy the vast majority of the market. According to the latest data from Q4 2025, Samsung ranks first with a market share of 36.6%, SK Hynix ranks second at 32.9%, and Micron ranks third at 22.9%.

Capacity has shifted toward higher-margin HBM, which keeps conventional memory supply tight and preserves pricing power. Mainboard DRAM's unit margins trail HBM's, but its overall market is the largest.

L3 CXL Pooling Layer

CXL (Compute Express Link) allows DRAM to be "pooled" from a single server motherboard to an entire rack.

With CXL 3.x, all the memory in a rack can be shared, scheduled across multiple GPUs, and allocated on demand. This addresses the capacity and data-transfer bottlenecks that KV caches, vector databases, and RAG indexes hit during AI inference.
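The pooling idea can be sketched as a toy allocator: hosts borrow from one shared rack-level pool instead of each being capped at its local DIMMs. The class and its API are invented for illustration; real CXL pooling is handled by hardware and fabric managers, not application code.

```python
# Toy sketch of rack-level memory pooling in the spirit of CXL 3.x.
# A shared capacity pool grants and reclaims memory on demand.
class MemoryPool:
    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.allocations: dict[str, int] = {}

    def free_gb(self) -> int:
        return self.total_gb - sum(self.allocations.values())

    def allocate(self, owner: str, gb: int) -> bool:
        """Grant `gb` to `owner` if the pool has free capacity."""
        if self.free_gb() < gb:
            return False
        self.allocations[owner] = self.allocations.get(owner, 0) + gb
        return True

    def release(self, owner: str) -> None:
        self.allocations.pop(owner, None)

# Eight GPUs share one 4 TB rack pool instead of a fixed 512 GB each,
# so a single hot workload can temporarily claim far more than its share.
pool = MemoryPool(total_gb=4096)
assert pool.allocate("gpu0-kv-cache", 1536)   # one hot GPU takes 1.5 TB
assert pool.allocate("gpu1-rag-index", 512)
assert pool.free_gb() == 2048
pool.release("gpu0-kv-cache")                 # capacity returns to the pool
assert pool.free_gb() == 3584
```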

The CXL memory module market was only about $1.6 billion in 2024 and is projected to grow to $23.7 billion by 2033; the Samsung, SK Hynix, and Micron oligopoly looks set to carry over here as well.

In this layer, Astera Labs produces retimers and smart memory controllers between CXL and PCIe, capturing about 55% of this sub-market. In the latest quarter, revenues reached $308 million, up 93% year-on-year, with a non-GAAP gross margin of 76.4% and a net profit increase of 85% year-on-year, indicating substantial profits.

L4 Enterprise-level SSD: The biggest beneficiary of the inference era

Enterprise-level NVMe SSDs are the main battleground for AI training checkpoints, RAG indexes, KV cache offload, and model weight caching. QLC high-capacity SSDs have completely pushed HDDs out of the AI data lake.

In 2025 the enterprise SSD market is estimated at about $26.1 billion and, growing at a 24% CAGR, is expected to reach $76 billion by 2030.

The landscape is still dominated by the three giants.

Market share based on Q4 2025 revenues shows Samsung at 36.9%, SK Hynix (including Solidigm) at 32.9%, Micron at 14.0%, Kioxia at 11.7%, and SanDisk at 4.4%. The top five together account for about 90%.

The biggest change in this layer is the explosion of QLC SSDs in AI inference scenarios. Solidigm, a subsidiary of SK Hynix, and Kioxia already ship drives with 122 TB in a single disk, while AI inference workloads are shifting KV caches and RAG indexes from HBM toward SSD.

From a profit pool perspective, enterprise SSDs lack HBM's extreme margins but collect a double dividend from capacity growth and the expansion of inference.

Solidigm and Kioxia are relatively pure plays on this layer, while Samsung and SK Hynix benefit from HBM, DRAM, and NAND across all three layers, making them more comprehensive AI storage platform companies.

L5 NAS and Cloud Object Storage: The compounding pool of data gravity

NAS and cloud object storage are the outermost layer for AI data lakes, training corpora, backup archiving, and cross-team collaboration. In 2025, NAS is expected to be about $39.6 billion (CAGR 17%), and cloud object storage about $9.1 billion (CAGR 16%).

The main vendors in enterprise file storage are NetApp, Dell, HPE, and Huawei, while Synology and QNAP serve small and medium enterprises. By IaaS share, AWS holds about 31–32%, Azure about 23–24%, and Google Cloud about 11–12%, roughly 65–70% combined.

The profits in this layer come mainly from long-term hosting, data egress fees, and ecosystem lock-in.

To summarize:

1) DRAM has the largest market but the lowest margins (30–40%); HBM is only about a third of DRAM's size but earns roughly double the margin (60%+); CXL retimers are the smallest market with the highest margins (76%+). The closer a layer sits to compute, the scarcer and more profitable it is.

2) Incremental profit pools mainly come from three areas: HBM (CAGR 28%), enterprise SSD (CAGR 24%), CXL pooling (CAGR 37%).

3) Each layer has a different moat: HBM rests on process barriers (TSV, CoWoS, and yield ramp-up); CXL rests on IP and certification, with retimers effectively single-sourced; the service layer rests on switching costs.

Disclaimer: This article represents only the personal views of the author, not the position or views of this platform. It is shared for information only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If an article or image on this page infringes your rights, please email the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will investigate.
