In an interview in early May 2026, Jensen Huang delivered a verdict on the next generation of the computing landscape: NVIDIA has struck a new partnership with optical materials giant Corning to deploy advanced optical interconnect solutions for AI infrastructure at scale and to expand related manufacturing capacity in the United States. Rather than rehearsing the familiar narrative of "how powerful the computing is," he pointed directly at the real pain point of today's AI infrastructure: between ever-larger GPU clusters, copper wiring can no longer support the required bandwidth and performance. Signal attenuation and heat generation in high-bandwidth scenarios have accumulated into an unavoidable system bottleneck. By contrast, he sees fiber optics and optical interconnects as a new "vascular network" that should replace, or substantially supplement, traditional copper on an unprecedented scale, realigning data-center interconnect capability with chip-level compute. More critically, this is not a simple component upgrade. It is NVIDIA, together with Corning, pushing optical interconnects from communication networks into the core of AI infrastructure and binding them to U.S. domestic manufacturing. The move points in three directions: AI infrastructure shifting from copper to fiber, optimization moving from single-point chips to the full system stack, and this technological choice being embedded in the long-term game of rebuilding the U.S. technology supply chain.
Copper Wires Can't Keep Up: Bandwidth Crisis Under AI Computing Explosion
In the past few years, large models have grown like a snowball, and NVIDIA has stacked one generation of GPUs after another into data centers. As single-machine performance climbed, the problem shifted to what happens between machines. Training a large model often involves thousands to tens of thousands of GPUs forming a cluster that keeps doing one thing: after each step, synchronizing gradients and parameters across the entire network. The result is that the busiest elements in the server room are no longer the chips but the cables and switching networks that connect them: traffic grows exponentially, while the seemingly ordinary copper wires between racks increasingly fail to keep pace.
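To make the scale of that synchronization traffic concrete, here is a back-of-envelope sketch of per-GPU communication volume for one gradient sync using ring all-reduce (a common collective pattern, where each GPU moves roughly 2·(N−1)/N times the gradient size). The model size, GPU count, and link speeds below are illustrative assumptions, not figures from the article or any vendor.

```python
# Back-of-envelope estimate of per-step gradient-sync traffic in a GPU
# cluster using ring all-reduce. All numbers are illustrative assumptions.

def ring_allreduce_bytes_per_gpu(param_count: int, bytes_per_param: int,
                                 num_gpus: int) -> float:
    """Each GPU sends (and receives) roughly 2*(N-1)/N times the gradient size."""
    grad_bytes = param_count * bytes_per_param
    return 2 * (num_gpus - 1) / num_gpus * grad_bytes

def sync_time_seconds(bytes_per_gpu: float, link_gbps: float) -> float:
    """Lower-bound sync time if the per-GPU network link is the bottleneck."""
    return bytes_per_gpu / (link_gbps * 1e9 / 8)

if __name__ == "__main__":
    params = int(70e9)  # hypothetical 70B-parameter model, fp16 gradients
    traffic = ring_allreduce_bytes_per_gpu(params, 2, 8192)
    for gbps in (100, 400, 1600):  # progressively faster per-GPU links
        print(f"{gbps:>5} Gb/s link: ~{sync_time_seconds(traffic, gbps):.2f} s per sync")
```

Even under these rough assumptions, each GPU must move hundreds of gigabytes per sync, which is why link bandwidth, not raw FLOPS, becomes the dominant term as clusters grow.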
In engineers' terms, copper cables are cheap and convenient at low speeds and short distances, but once speeds rise and distances stretch, every physical problem comes due: the faster signals run over copper, the more severely they attenuate, and usable bandwidth is forced down. To barely preserve signal quality, the transmitter has to pour in more power, while the transceiver chips need ever more complex compensation and error correction, so energy consumption and heat climb steeply along the whole link. For AI clusters, this means buying a rack full of high-end GPUs only to be throttled by a few wires: network ports cannot supply enough bandwidth, a large share of server-room power is burned by the interconnect itself, and cooling at the cabling points between cabinets becomes an urgent problem.
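The physics behind this can be sketched numerically. Copper attenuation grows roughly with the square root of signal frequency (the skin effect), while fiber loss is distance-dominated and nearly flat with data rate. The coefficients below are rough textbook-style assumptions for illustration, not measurements of any particular cable or fiber product.

```python
import math

# Illustrative link-loss comparison: copper loss scales ~sqrt(frequency)
# due to the skin effect; fiber loss is ~0.25 dB/km regardless of rate.
# Coefficients are hypothetical, chosen only to show the scaling trend.

def copper_loss_db(length_m: float, freq_ghz: float, k: float = 2.0) -> float:
    """Approximate copper loss: k dB per meter at 1 GHz, scaling with sqrt(f)."""
    return k * length_m * math.sqrt(freq_ghz)

def fiber_loss_db(length_m: float, alpha_db_per_km: float = 0.25) -> float:
    """Fiber loss depends on distance, almost not at all on signaling rate."""
    return alpha_db_per_km * length_m / 1000.0

if __name__ == "__main__":
    # Nyquist frequencies loosely corresponding to faster per-lane signaling
    for freq in (12.5, 26.5, 53.1):
        c = copper_loss_db(3.0, freq)  # 3 m direct-attach copper run
        f = fiber_loss_db(3.0)
        print(f"{freq:5.1f} GHz: copper ~ {c:5.1f} dB, fiber ~ {f:.4f} dB")
```

The trend, not the exact numbers, is the point: doubling the signaling rate makes a 3-meter copper run markedly lossier, while the same fiber run barely registers, which is why higher per-lane speeds push interconnects toward optics.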
When operations like gradient synchronization and parameter exchange become the protagonists of the training process, the bottleneck in overall system performance shifts from the compute chips to the interconnect network itself. Huang's public statement that "traditional copper wires can no longer meet the demand" is not about an isolated product upgrade; it signals that AI infrastructure as a whole has reached a crossroads: without a higher-bandwidth, more energy-efficient way to connect, simply piling on more GPUs will leave the training and inference of next-generation large models stuck at the interconnect.
From GPU to Fiber Optics: NVIDIA Bets on Next-Generation Interconnects
NVIDIA is no longer just a company that sells a single GPU chip. From complete server systems to network cards and software stacks, it acts as the general contractor for entire "computing power factories" in AI infrastructure, and how fast that whole system can run depends precisely on whether the interconnect architecture can withstand the communication pressure among massive numbers of GPUs. In his recent statements, Huang was explicit: next-generation artificial intelligence infrastructure "will require a large amount of optical connections," and traditional copper can no longer meet bandwidth and performance demands. This is not about adding a thicker cable to existing products; it is an acknowledgment that unless the interconnect changes course, the performance ceiling of the entire system, from GPUs to racks to data centers, is already locked in by copper.
In this light, NVIDIA's collaboration with Corning occupies a strategically transformative position. The industry widely regards fiber optics and optical interconnects as superior in bandwidth, transmission distance, and potential energy efficiency, able to relieve the bandwidth limits, signal attenuation, and heat problems copper faces in high-speed scenarios, directly addressing the pain points of large-scale AI clusters. Huang emphasized that NVIDIA will expand the application of optical technology in AI infrastructure on an unprecedented scale: not piecemeal replacement of a few segments of cable, but embedding the optical interconnect solution developed with Corning into the entire system architecture, using fiber to rewrite how GPUs connect to each other, servers to servers, and even data centers to data centers. It is here that NVIDIA truly places its bet on the next interconnect battlefield.
Corning Enters the Scene: A Key Piece of the U.S. Optical Supply Chain
To turn "using light to connect AI" from a concept into a data center of concrete and steel, one must first answer a simple question: who will make the light? Corning is almost the standard answer. The company has long been deeply involved in fiber optics and optical materials, with a complete technological lineage and large-scale manufacturing experience spanning optical telecommunications networks and related material processes. It knows how to turn what appears to be an ordinary fiber core into infrastructure capable of supporting global communication trunks, and, even more, how to maintain consistency and reliability in mass production. For NVIDIA, betting on optical interconnects means marrying the connection needs between GPUs to a "pipeline of light" that has already been repeatedly validated in the telecommunications industry and can be mass-produced and replicated.
Research briefs indicate that the goal of this cooperation is no longer merely to produce a few types of optical fiber products for traditional telecommunications networks, but to fully integrate optical interconnect solutions into AI infrastructure, letting fiber absorb the traffic surges of GPU clusters directly. When discussing the collaboration, Huang did not stop at the level of technical cooperation but repeatedly emphasized the need to "expand related manufacturing capabilities in the U.S.," tying Corning's production layout directly to domestic U.S. interests. He sees this as part of "an important opportunity brought by the rebuilding of the technology supply chain in the U.S.": on one side, NVIDIA attempts to rewrite the AI interconnect architecture; on the other, Corning expands optical component production in the U.S. Where the two overlap, a supply chain emerges in which everything from algorithms and chips to fiber optics and factory addresses points to the U.S., and that is exactly why this cooperation has been amplified into a discussion at the industrial and national levels.
Policies and Capital Synchronize: The U.S. Technology Supply Chain Reconfigures
Looking at the NVIDIA-Corning deal along the policy timeline, one finds it is not an isolated commercial cooperation but more like a highlight in the narrative of America's "technological manufacturing rebirth." In recent years, the U.S. has used legislation and subsidies to guide semiconductor and key technology manufacturing back home, with "rebuilding the technology supply chain" repeatedly written into official statements. Huang deliberately ties optical interconnects to domestic production capacity, describing the expansion of related manufacturing capabilities in the U.S. as "an important opportunity brought by the rebuilding of the technology supply chain in the U.S." This embeds NVIDIA's interconnect roadmap and Corning's expansion plans directly into the national narrative of supply chain security: AI computing power must be in the U.S., and the fiber optics and components that connect that computing power must be in the U.S. as well.
From the perspective of capital logic, this is policy guidance and corporate capital expenditure resonating in sync. As the leader in AI infrastructure, NVIDIA could have sought the lowest-cost optical supply chains globally, yet it chose to expand optical interconnect capacity in the U.S. with Corning, which has deep manufacturing capabilities in optical communications. This sends an "align with policy" signal to the market: the key components of next-generation AI infrastructure will land first in jurisdictions with favorable policy. Research briefs indicate that this combination benefits not only the two companies but may also serve as a template: other GPU, server, and network equipment makers will be forced to rethink their choices of optical interconnects and production bases, upstream materials companies will weigh whether to follow their customers into the U.S., and global supply chains will be restructured around who can supply in the U.S. and who can access this new chain.
The Era of Optical Interconnects Begins: The Next Stop for AI Infrastructure
From "copper wires can't hold up" to "betting on fiber optics," and finally to "binding production capacity in the U.S.," NVIDIA and Corning have laid out a clear route: first acknowledging that copper has become a structural bottleneck for high-bandwidth clusters, then rewriting the interconnect layer with fiber optics and optical materials, and finally locking this new interconnect capability into a domestic supply chain. Notably, the signals released by this cooperation focus on technological direction, partner selection, and statements about expanding optical applications on an "unprecedented scale"; specific dollar amounts and capacity figures have been deliberately left vague, suggesting both parties want to frame the narrative of the era first and fill in the details later. By elevating optical interconnects to a central theme of next-generation AI infrastructure, NVIDIA is effectively declaring that future data centers will not simply be stacked around individual compute chips but will need to be redesigned and optimized around optical connections, forcing servers, network devices, and even overall architectures to adapt to this new mainstream. The penetration path will likely start with the most bandwidth-hungry cluster interconnects and rack-to-rack links inside data centers, then spread to more levels. Along the way, hurdles in packaging, reliability, cost curves, and scalable manufacturing must still be crossed; fiber optics are not a magic bullet that can replace copper overnight.
For the supply chain and investors, the implication is equally direct: the combination of NVIDIA and Corning illustrates that true competition has already extended from the computing chips themselves to interconnect technology and supply chain control, and the future winners will not just be determined by who has the stronger GPU, but by who can establish a closed loop that is harder to replace along the chain of “computing power + optical interconnect + domestic manufacturing.”
Disclaimer: This article represents only the personal views of the author and does not represent the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify it.



