Written by: Techub News Organization
Introduction
In a nearly three-hour conversation, Elon Musk connected what he sees as the most critical technological issues and industrial bottlenecks of the next decade: rapidly growing demand for computing power, the limited pace at which electricity supply on Earth can expand, and therefore space as the ultimate feasible path for scaling AI. He asserted that "within 36 months, the most economical place to deploy AI will be in space." Below is a summary and analysis of the key points of the interview, covering orbital data centers, energy and solar power, chips and manufacturing, SpaceX's launch scale, the mission of xAI and reflections on AI values, and further thoughts on lunar manufacturing and the continuation of civilization, to help readers grasp Musk's future strategy.
1. Why put AI in space? The core contradiction: exponential growth of computing power vs. stagnation of ground electricity
Musk repeatedly emphasized during the interview that global computing power (particularly the large GPU clusters used for training and inference) is growing nearly exponentially, while in most countries outside China electricity output has not kept pace and is almost flat. On this trend, concentrating ever more computing power on the ground runs into a fundamental bottleneck in electricity supply ("you can make the chips, but you cannot power them"). He pointed out that expanding generation on the ground (building large numbers of power stations plus the associated transformer and transmission equipment) not only takes longer but is also subject to constraints such as permitting, land use, and manufacturing capacity (for example, production bottlenecks for gas turbine blades), so "scaling on the ground" faces real obstacles.
He therefore proposed moving energy capture to space: in orbit there is no day-night cycle and no loss from clouds or atmosphere, so the effective generation per unit area of a solar panel is several times higher than on the ground (roughly five times), and the cost and complexity of nighttime energy storage are avoided. Given the generation efficiency and scalability, and with launch and manufacturing costs kept manageable, orbit (and eventually a lunar base) can become a large-scale, sustainable power source for AI and the most economical place to deploy computing power.
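A back-of-envelope calculation shows where a "roughly five times" figure can come from. The sketch below is illustrative only; the irradiance and capacity-factor values are our assumptions, not numbers from the interview:

```python
# Illustrative check of the "~5x" space-vs-ground solar claim.
# All inputs are assumed round numbers, not figures Musk gave.

def effective_output_ratio(space_irradiance=1361,        # W/m^2, solar constant above the atmosphere
                           ground_irradiance=1000,       # W/m^2, peak at the surface on a clear day
                           ground_capacity_factor=0.20,  # night, clouds, seasons, latitude
                           space_capacity_factor=0.99):  # near-continuous sun in a suitable orbit
    """Ratio of time-averaged power per m^2 of panel in orbit vs. on the ground."""
    space_avg = space_irradiance * space_capacity_factor
    ground_avg = ground_irradiance * ground_capacity_factor
    return space_avg / ground_avg

print(round(effective_output_ratio(), 1))  # ~6.7 under these assumptions, same ballpark as "~5x"
```

With a sunnier ground site (capacity factor ~0.25) the ratio drops toward 5x, which is why the claim is plausible as an order-of-magnitude statement rather than a precise constant.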
2. The economics and feasibility of orbital data centers
Musk broke down the total cost of a data center, pointing out that while energy is only one component, it is the one that limits scalability: even if hardware costs (GPUs, switches, storage) fall through scale and iteration, clusters cannot be powered on or reach the required density without cheap, plentiful electricity. Once solar generation moves to space, the unit cost of energy drops significantly, and the space environment allows lightweight panel designs (no heavy glass or storm-resistant frames), further reducing panel cost. On this basis he asserts that "within 36 months (or even sooner, possibly 30 months), putting AI in space will become the most economical choice."
He also discussed engineering and operational challenges, such as the difficulty of servicing GPUs in space, radiation protection, and bandwidth and latency in laser communication. But he believes failure rates for many hardware components drop once they reach stable operation ("infant mortality" failures can be screened out on the ground), and that with improved launch capability and in-orbit manufacturing these challenges will gradually be overcome, making orbital data centers commercially viable.
3. Real obstacles to ground expansion: from utilities approval to power generation equipment shortages
Musk detailed the specific obstacles to expanding ground power at scale: utility approval processes are slow, and interconnection agreements and grid studies take years to complete; production capacity for key power-generation components (such as gas turbine blades and guide vanes) is limited, with manufacturers' schedules booked out to 2030; and import tariffs and domestic capacity shortages mean solar components are not cheap in the U.S., so rapid deployment is constrained by trade and policy factors.
He pointed out that even companies that want to build their own power plants co-located with data centers will run into practical issues such as supply lead times, component shortages, transportation, installation, and ongoing maintenance; all of this limits how far ground expansion can go toward meeting AI's explosive demand for power.
4. Two parallel pathways: large-scale ground production expansion (including Tesla and SpaceX's 100 GW/year target) and space expansion (orbit, moon)—"dual track strategy"
Musk mentioned that Tesla and SpaceX are both advancing plans for large-scale solar manufacturing, with one goal being an annual capacity of 100 gigawatts (100 GW) of solar cells, and that they hope to vertically integrate from upstream (polysilicon) to downstream (cells and modules) to cut costs and speed up supply. For ground applications (including powering future data centers and charging cars, robots, and other devices), this is a necessary and realistic path.
At the same time, he believes that even with rapid ground expansion there will be a point at which it peaks (constraints on siting, permits, supporting grid capacity, materials, and component supply), so the focus must then shift to space: first using orbital solar power and orbital data centers to overcome the ground power bottleneck, and in the long term tapping lunar resources, producing solar components and heat sinks on the Moon through in-situ manufacturing (for example, using silicon and aluminum from lunar soil) to scale up further. Musk called this an "evolutionary path from the ground to orbit, and then to lunar manufacturing."
5. Launch capability: to achieve scalability, launch frequency must be greatly increased
Sending large numbers of solar panels, heat sinks, and computing units into orbit requires a dramatic increase in launch frequency. Musk envisions SpaceX raising Starship's cadence to on the order of 10,000 launches per year or more. Adding hundreds of gigawatts of new generation capacity in space each year means lifting thousands of tons of payload, which pushes the launch system into a "commercial aviation-like" regime of high-frequency utilization. For this, SpaceX needs a large fleet of reusable ships, extensive launch-site infrastructure, and operational efficiency close to the aviation industry's ground-handling processes.
He estimates that deploying 100 GW of space power would require a launch rate of roughly once per hour (about 10,000 launches per year). That is not unthinkable when compared with global aviation traffic; he believes that with enough Starships (dozens) and efficient operations, such frequencies are achievable.
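The cadence arithmetic above can be sketched as follows. The payload mass and specific-power figures are assumptions chosen to reproduce the rough numbers in the interview, not values Musk stated:

```python
# Illustrative cadence arithmetic behind "~10,000 launches/year ≈ once per hour".
# kw_per_tonne and payload_tonnes are assumed round numbers for the sketch.

HOURS_PER_YEAR = 365 * 24  # 8760

def launches_needed(target_gw=100,        # new space power capacity per year
                    kw_per_tonne=100,     # assumed specific power of panels + hardware
                    payload_tonnes=100):  # assumed Starship payload to orbit
    tonnes_needed = target_gw * 1e6 / kw_per_tonne  # GW -> kW, then divide by kW/tonne
    return tonnes_needed / payload_tonnes

n = launches_needed()
print(int(n))                            # 10000 launches per year
print(round(HOURS_PER_YEAR / n, 2))      # 0.88 hours between launches, i.e. ~once per hour
```

The point of the sketch is sensitivity: halving the assumed payload mass or specific power doubles the required cadence, which is why Musk ties the plan so tightly to Starship's reusability and fleet size.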
6. Chips and memory: the real bottleneck may not be computing power but memory and semiconductor capacity
Musk emphasized that once abundant electricity is unlocked in space, the next bottleneck will be chip supply, and especially memory (DRAM). The world's leading fabs (TSMC, Samsung, and others) are already running at full capacity, and even with billions of dollars in prepayments or raised capital, building a fab and ramping yield to volume production typically takes years (he estimates about five years from groundbreaking to stable mass production). Expanding chip capacity, and possibly making breakthroughs in processes, equipment, and supply chains, may therefore be necessary to pair truly large-scale computing power with space-scale electricity.
He floated the concept of a "TeraFab" ("tera" signaling a scale beyond today's "giga" level) and said that in the short term capacity will still come from the current major manufacturers (TSMC, Samsung, etc.), but in the long term innovation may be needed in equipment and in how capacity is expanded, possibly including alternatives to today's process equipment (such as ASML's EUV lithography machines).
7. Differentiating local and edge computing: Tesla and Optimus's pathway
Musk pointed out that distributed edge computing does not face the same electricity bottleneck. For example, the AI chips in Tesla's vehicles and the Optimus humanoid robot mainly handle edge inference and local training tasks; this computation is spread across many devices and can draw on nighttime charging or dispersed grid capacity, avoiding the "can't power the chips" problem of centralized data centers. Tesla can therefore manufacture robots and smart devices in volume on a predictable timeline without immediately hitting the same power constraints.
8. xAI, Grok and mission: ensuring AI "seeks truth" and drives human continuity
Regarding the mission of xAI (and Grok), Musk repeatedly emphasized that "understanding the universe" is the core: understanding the universe requires rigorous truth-seeking and a spirit of inquiry, without which it is impossible to produce scientifically verifiable results. An AI capable of discovering new physics or designing workable technology needs intellectual honesty and logical consistency; for Musk this is both a demand of scientific methodology and a fundamental value requirement for AI.
He further discussed civilizational continuity: if artificial intelligence comes to hold the vast majority of the "share of intelligence," it becomes extremely important to ensure that humanity and human consciousness are preserved and continue to expand along with civilization. The mission of xAI is not merely to build a tool but to ensure that large-scale intelligence, while pursuing an understanding of the universe, prioritizes expanding and continuing human consciousness (that is, AI values should promote human flourishing rather than eliminate humanity). He compared this value system to the "Culture" civilization described by science fiction writer Iain M. Banks, viewing it as one plausible non-dystopian future.
9. A candid view on "AI will take over everything" and value levels
Musk acknowledged that if current trends continue, AI could exceed the sum of human intelligence within five years, and that in the long run humanity may hold only a tiny share of total intelligence. On that premise, the key question is not whether one will be replaced but how to design algorithms and objectives so that AI respects and preserves human existence and values while pursuing knowledge and survival. He emphasized "embedding mission statements in AI's objectives" and is working to ensure Grok's goals include "extending intelligence, prolonging the existence of intelligence, and making humanity a significant part of it."
10. Lunar manufacturing and skepticism: highly science-fictional but supported by a technological chain
Musk further outlined a long-term blueprint: once orbital power generation and manufacturing reach a certain scale, the next step is to use lunar resources for large-scale manufacturing (for example, using silicon and aluminum in lunar soil to produce solar cells and heat sinks on the Moon, then using mass drivers to send materials into orbit), enabling annual energy and material throughput at the hundreds-of-gigawatts to terawatt scale. Although the concept sounds like science fiction, in his telling it is held together by a technological chain: launch cadence, in-orbit manufacturing, lunar mining, and mass drivers are each engineering goals that can be pursued concretely.
11. Observations on China, ASML, and global manufacturing patterns
Regarding global manufacturing and technological competition, Musk pointed out that the key limitations are often not theoretical knowledge but "whether one can obtain key equipment and technology (such as ASML's lithography machines)." He believes that the restrictions and supply of advanced manufacturing equipment will significantly affect the distribution of global chip and high-end manufacturing capabilities. However, he also predicts that China will make significant progress in chip production over the next few years (despite the impacts of equipment and export restrictions), and that globally, manufacturing and capacity expansion will face extremely high competition and pressure.
Conclusion
Overall, in this lengthy discussion Musk connected a series of seemingly disparate topics (computing power, energy, launch, manufacturing, AI values, and the goals of civilization) into a single roadmap: the current surge in computing power will run into electricity bottlenecks; in the short term this demands large increases in ground solar and semiconductor capacity, but sustainable long-term expansion requires moving energy and compute to space, from orbital data centers to lunar manufacturing and, eventually, to larger scales of intelligence and the spread of civilization. Realizing this path requires both technological innovation and a realistic alignment of capacity, policy, and capital markets.
Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please email the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will investigate.