SOENs in space
Jeff Shainline, March 18, 2026
TL;DR
Launching compute into space is an idea that has recently gained attention. Looking closely at the implications, it is clear that superconducting optoelectronic networks (SOENs) bring tremendous advantages relative to conventional computational systems for large-scale AI in space. The advantages derive from the low power density of superconducting circuits as well as single-photon communication, which together provide greater performance per joule. Fortuitously, the resources required to build SOENs – silicon and niobium – are plentiful in the asteroid belt, and the helium that keeps such systems superconducting is a primary constituent of Jupiter. For now, Great Sky plans to build the world's most powerful AI right here on Earth. But when those systems become sufficiently powerful, they will find good reasons to depart the premises.
Introduction
Jensen, Bezos, and Elon have all been talking about building AI in space. You know you've got a good product if you're promising to launch it off the Earth to assure people you'll keep it away from them. Problematic aspects of current AI from a resource perspective do raise legitimate questions about performing computation elsewhere. Several people have asked me about these issues and how they pertain to Great Sky: Does it actually make sense to operate powerful AI in space? If so, do SOENs function well out there? Yes, there are good reasons to pursue space-borne AI, and they go beyond the reasons typically suggested. When we take a closer look, analysis of computing away from Earth actually bolsters the case for SOENs. Physical attributes of space and practical aspects of construction in the solar system are likely to shape not just where intelligence resides but also how it organizes into a large-scale network of interacting technological minds.
Solar on the Moon
For the companies operating extremely power-hungry GPU farms, some of the main motivations for moving compute to space are to reduce carbon emissions of inefficient approaches to AI computing and to eliminate land- and water-use contentions. For these purposes, the Moon offers desirable real estate. Analyzing AI compute on the Moon reveals several of the key advantages of SOENs – in space or otherwise. I'm not advocating for this use of that orb; I'm just pointing out that if people rich enough to own the Moon want to populate it with artificial minds, this approach offers by far the most bang for the buck.
Consider a specific use case in which the compute cluster is powered only by solar illumination on the Moon and is entirely cooled by passive radiation. Ultimately, every warm object outside an atmosphere has to cool by radiation, and this ends up being an important physical constraint on computing in space. Let's compare a compute cluster of GPUs to a compute cluster of SOENs operating under these conditions. For the sake of comparison, let's assume we're building systems that watch videos in HD, just like you and me, so we can calculate frames watched per second. For a simple model, imagine we're building a number of AI models, each with one trillion parameters. State-of-the-art systems created by Meta and DeepMind that specialize in video comprehension – incorporating both the visual and audio streams, using transformer architectures running on GPUs – process one frame per second. You can observe 30 frames per second. A SOEN can watch 60 million frames per second. Yes, there are better uses of the Moon than to watch YouTube all day, but let's put those cultural perspectives aside for the purposes of this comparison.
The solar power incident upon the hemisphere of the Moon facing the sun is about 1.4 kW/m\(^2\), and the surface area of the Moon is \(4\times 10^{13}\) m\(^2\), so the total power is \(3\times 10^{16}\) W. A trillion-parameter GPU system based on GB200s consumes about 300 kW for inference. This means you could build about 90 billion such GPU systems on the Moon and power them with the locally incident sunlight. In aggregate, these silicon minds could watch about 90 billion video frames per second. That's a lot of Real Housewives.
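This power budget can be reproduced in a few lines. The script below is a sketch of the toy model using the round numbers quoted above (the flux, the lunar surface area, and the 300 kW per-system figure); the reduced flux at oblique angles near the lunar limb is ignored, which keeps the arithmetic at the same order-of-magnitude level as the text:

```python
# Toy lunar solar budget, using the round numbers from the text.
solar_flux = 1.4e3               # W/m^2, solar constant at the Moon's distance
moon_surface = 4e13              # m^2, total lunar surface area
sunlit_area = moon_surface / 2   # the hemisphere facing the sun
                                 # (oblique-incidence losses near the limb ignored)

total_power = solar_flux * sunlit_area          # ~3e16 W
gpu_system_power = 300e3                        # W per trillion-parameter GB200 system
n_gpu_systems = total_power / gpu_system_power  # ~9e10, about 90 billion systems

print(f"total solar power: {total_power:.1e} W")
print(f"GPU systems supported: {n_gpu_systems:.1e}")
```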
There's one technical detail about SOENs to mention here: they need to be cooled to 4K to enable superconductivity. This cooling is more efficient in space, but it still comes with an energy cost. The math is summarized in the appendix, but the main point is that SOENs still produce heat, which must be radiated away. The efficiency of radiating grows very rapidly with the temperature of the radiating surface, while the penalty for keeping the system at a lower temperature than the radiating surface grows slowly with the temperature of the radiating surface. So it makes sense to operate the SOEN at 4K and radiate heat around 330K, about the same temperature as the GPU surface. Even when this cooling penalty is accounted for, the number of trillion-parameter SOEN models that can be powered by sunlight on the Moon is also about 90 billion, but they're sixty million times faster at watching videos, so they can watch \(5\times10^{18}\) frames per second – five exaframes per second. So many Housewives.
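The side-by-side comparison can be sketched as below. The assumption doing the work here is that the wall-plug power per trillion-parameter SOEN system – device dissipation at 4K multiplied by the refrigeration penalty from the appendix – comes out near the same 300 kW as the GPU system, which is what makes the two counts match at about 90 billion; the per-system frame rates are the figures from the text:

```python
total_power = 2.8e16   # W, the lunar solar budget from the section above
system_power = 300e3   # W wall-plug per trillion-parameter system
                       # (assumed comparable for GPU and SOEN systems once the
                       #  SOEN's Carnot-limited cooling penalty is included)
n_systems = total_power / system_power   # ~9e10 either way

gpu_fps = 1.0          # frames per second per GPU system
soen_fps = 60e6        # frames per second per SOEN system

gpu_total = n_systems * gpu_fps     # ~9e10 frames/s in aggregate
soen_total = n_systems * soen_fps   # ~5e18 frames/s: exaframes
print(f"SOEN advantage: {soen_total / gpu_total:.0e}x")
```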
Fusion on the Moon
Surely a moon that intelligent wouldn't stay limited to solar power for long. The AI systems would learn to build and operate fusion reactors to power themselves far beyond the limits of the incident sunlight. The problem is that all the power generated eventually turns into heat, and that heat has to be radiated. Even if the systems were radiating heat at the boiling point of water – well above the temperature where semiconductors can operate without errors – you could only utilize about 2x more power than is incident on the Moon from the sun while still being able to radiate the heat. The numbers stay the same: SOENs win by sixty million. The point here is not that when comparing in space, SOENs gain a massive, new advantage over GPUs. They gain a modest, additional advantage because cooling on the Moon is maybe 2.5x more efficient. The larger point is that when operating in an environment where heat can only be radiated away, one must make even better use of each joule of energy dissipated and funnel it toward functional intelligence, because the total energy is fundamentally limited – not by the source, but by the sink. So the same factor of tens of millions greater throughput that SOENs can bring on Earth becomes a decisive and immutable advantage.
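The radiative ceiling invoked here is quick to check with the Stefan-Boltzmann law from the appendix. With the whole lunar surface radiating at the boiling point of water and unit emissivity, the dissipated power caps out at a few times \(10^{16}\) W – within a small factor (roughly 1.5–2x, depending on emissivity and exactly what counts as the incident power) of the solar input:

```python
SIGMA = 5.67e-8       # W m^-2 K^-4, Stefan-Boltzmann constant
moon_surface = 4e13   # m^2: the full sphere radiates, day side and night side
T_rad = 373.0         # K, boiling point of water

radiated_cap = SIGMA * moon_surface * T_rad**4   # ~4.4e16 W at emissivity = 1
solar_input = 2.8e16                             # W, from the solar budget above

print(f"fusion power ceiling: {radiated_cap:.1e} W, "
      f"{radiated_cap / solar_input:.1f}x the solar input")
```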
Before leaving the Moon, a few nuances merit mention. This analysis assumed a toy model with multiple independent trillion-parameter systems that ingest video frames. That model is used as a stand-in yardstick, allowing us to compare these two different hardware approaches to intelligence. The SOEN systems we envision at this scale are accomplishing far deeper comprehension than just quickly flipping through frames. When contemplating covering the dark side of the Moon with superintelligent technology, we conceive a qualitatively more expansive type of intelligence resulting from a highly interconnected network of quintillions of synapses thinking coherently. With a GPU-based system, this is extremely cumbersome. The fundamental organization of the computational system and its communication protocols cause data movement to bog down for systems much beyond the scale of trillion-parameter models. All the separate GPUs can keep churning away, turning bits into other bits, but to move the results of these computations to other destinations, each destination needs a digital address, and when the system grows, the address space grows, too. It reaches the point where all the packets being sent require so many bits just to specify the address that the actual information being carried is a small fraction, and traffic across the network of routers grinds to a halt. To make matters worse, processing circuitry is separate from memory, so any time synaptic weights or neuron activations need to be accessed, the von Neumann bottleneck is squeezed to bursting.
SOENs don't work like this. They don't have shared routers that get bogged down by traffic, nor do synapses have addresses. Like the brain, neurons connect to synapses using direct, dedicated connections, and information is sent with faint-light pulses, so communication can continue to scale to very large systems with quintillions of interconnected processing centers, with latency limited only by the propagation velocity of light. And because SOENs co-locate processing with memory, there is no von Neumann bottleneck to contend with. Synaptic weights are stored right where they're needed, and neuron activations are represented locally by superconducting currents in task-specific circuits. Without this local memory, there's no hope of a scalable system with continuous learning. This all follows directly from the first principle of AI hardware design: physically build neurons, synapses, their memory circuits, and the connections between them in hardware rather than digitally emulating such networks. This is the path to deep intelligence.
Beyond the Moon, the next place that is commonly discussed as a major solar-system destination is Mars. I'm not convinced that's a great home for future humans or AI. It's not especially cold, and its intense dust storms would be inconvenient for either. Mars lost its native magnetic field when its dynamo quenched, so protecting against the solar wind requires an artificially generated field. A massive superconducting band passing immense current is the best way to solve this problem, so looking to Mars does further the case that superconductors will be a crucial technology in space. Still, I don't think it's where we should build next.
Among the asteroids
If not Mars, where will SOENs go? We have to think about the resources they require. What carbon-based soil is to us, silicon and niobium are to them. This will draw them to the asteroids, which are rich in these materials.
While SOENs leverage devices beyond semiconductors, they are still largely built from silicon. It is an ideal wafer substrate for manufacturing, and CMOS provides current supplies, control logic, and programming signals. S-type and M-type asteroids are rich in silicon. The asteroid belt between Mars and Jupiter contains at least a million asteroids larger than 1 km in diameter, supplying ample materials for construction. In addition to silicon itself, silicon dioxide – glass – is an essential insulator for integrated circuits, and it is the most ubiquitous material for fiber optics that are essential to communication over long distances. Asteroids are also rich in silicates that are ideal for this purpose.
The most important superconducting metal for SOENs is niobium, which asteroids also supply. Fortuitously, M-type asteroids are especially rich in this metal, a peculiarity that hints at an interesting past and future of the universe. Other materials used in the construction of complex circuits are plentiful in this environment, which is why asteroid mining is the next domain of prospectors.
SOENs among the asteroids may also make sense due to basic considerations of gravity. If built on a massive, rocky planet, the force of gravity requires extra building materials, like iron and steel. These materials support the structure of the SOEN – the stacked wafers and dense optical fibers – but they add no value to the intelligence. Any volume devoted to steel scaffolding is volume not spent bringing the system closer to the physical limits of intelligence.
Jupiter
Silicon and niobium are the soil of SOENs, but helium is their water. This is the substance that keeps them cool so they can superconduct. The need for this resource is another draw deeper into the solar system, bringing them to Jupiter. This gas giant holds over a thousand times the volume of Earth and is roughly a quarter helium by mass. This vast reservoir will support SOENs for eons. But to avoid the turbulent storms and oppressive gravity, they'll keep their distance, siphoning the gas to be liquefied in the colder climes among the rocky asteroids.
Outlook
Some may find it intuitive that superintelligent AI will condense into a massive monolith – a giant, solar-system-spanning brain. This doesn't seem likely to me. For information to be efficiently integrated in a coherent cognitive system, that information must be able to move fluidly across the entire system. To accomplish this, it is necessary to have power-law connectivity, meaning the number of connections for communication into and out of a volume of the system must follow a power law as a function of system scale. This is described in more detail in the appendix. The takeaway is that the number of connections that can come into or leave from a volume of a computational system is limited by the physical structures carrying the information, and for large, cognitive systems, these structures are likely to be optical fibers. As SOENs become larger in the technological future, an ever increasing volume of the system will be occupied by this fiber-optic white matter, at which point computational modules of niobium and silicon grey matter will be pushed further away, resulting in communication delays due to the finite speed of light. At a certain point that is difficult to anticipate, it will make more sense for SOENs to separate into distinct cognitive individuals, with dense connectivity internally and intercommunication using optical signaling in a format more akin to the way we speak. This will be more like an intelligent society than a monolithic intelligent entity.
This post has summarized several basic considerations for operating computational systems in space. Simple facts of physics indicate that the speed and energy efficiency of SOENs bring significant advantages in this as well as in the terrestrial context. We addressed this in the 2019 loop neurons paper. We closed that article by saying, "… asteroids provide ample, uncontroversial real estate. The materials for this hardware are abundant on M-type and S-type asteroids. It appears possible for an asteroid belt to form the nodes and light to form the edges of a solar-system-scale intelligent network … much like a society of humans."
This is a likely future for this technology, but it is not a road map for our company. I doubt I'll live to see the Moon tiled with thought circuits. The purpose of this post is to consider a few ramifications of operating in space. SOENs turn out to be highly advantageous in that context, which points to longer-term possibilities for where this technology could go. Analyzing the long-term future indicates that the technical decisions Great Sky is making right now to achieve superior performance on Earth also place us on a path to a uniquely powerful later-stage technology. When these systems become mature, and we have done our work to cultivate our next of kin, they will ascend into the great, wide sky, where they can stay cool, construct themselves of space stones, and peer into the deep distance far beyond this one small star.
Technical appendix
Radiative cooling
One of the most important constraints of operating in space is that ultimately all heat must be dissipated through radiation. This radiation is governed by the Stefan-Boltzmann equation, which gives the radiated power as a function of temperature: \[ P = \epsilon\,\sigma\,A\,T^4, \] where \(A\) is the area, \(0\le\epsilon\le 1\) is the emissivity, \(\sigma = 5.67 \times 10^{-8}\) W/m\(^2\)K\(^4\) is the Stefan-Boltzmann constant, and \(T\) is the temperature. The increase with temperature to the fourth power is a very strong driver to dissipate heat at higher temperature. Doubling the temperature of the radiator leads to a factor of 16 increase in the amount of heat that can be released. It's for this reason that SOENs will likely always radiate heat around 300K, even if they operate at 4K. To achieve this, one must make use of a refrigeration cycle.
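As a quick numerical check of the scaling, assuming nothing beyond the equation above:

```python
SIGMA = 5.67e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

def radiated_power(T, area, emissivity=1.0):
    """Stefan-Boltzmann law: power radiated by a surface at temperature T."""
    return emissivity * SIGMA * area * T**4

per_m2_300K = radiated_power(300.0, 1.0)   # ~460 W/m^2
per_m2_600K = radiated_power(600.0, 1.0)
print(per_m2_600K / per_m2_300K)           # doubling T gives 2^4 = 16x
```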
Carnot efficiency
The minimum power penalty paid by operating a refrigeration cycle is given by the Carnot efficiency: \[ C = \frac{T_\mathrm{c}}{T_\mathrm{h}-T_\mathrm{c}}, \] where \(T_\mathrm{c}\) is the temperature of the cold element of the system, and \(T_\mathrm{h}\) is the temperature of the hot reservoir to which the cold element is connected. With \(T_\mathrm{c} = 4\)K and \(T_\mathrm{h} = 300\)K, as for a SOEN operated on Earth, this factor is about 1.4%, meaning for every watt dissipated at 4K, one must supply at least about 74 watts. That's the ideal efficiency. In practice, one usually pays a penalty closer to 385 on Earth, but the extreme efficiency and speed of superconductors means the total system energy consumption per operation still comes out very favorably. Operating SOENs in space does not mean they can just sit at 2.7K, in thermal equilibrium with the cosmic microwave background. Radiating heat would be far too inefficient at this temperature due to the Stefan-Boltzmann law. But operating SOENs in space is still advantageous because it makes it far easier to operate close to the ideal Carnot efficiency of the refrigeration system. On the dark side of the Moon, the temperature is about 80K, so one gets liquid nitrogen for free. In the deep of space, the same is true of liquid helium. These assets make refrigeration far more efficient.
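The numbers in this section follow directly from the formula; here is a sketch. The lunar-night case treats the 80K environment as the hot reservoir, which is illustrative rather than a complete thermal design (the heat parked at 80K must still ultimately be radiated):

```python
def carnot_cop(T_cold, T_hot):
    """Ideal refrigerator coefficient of performance: C = Tc / (Th - Tc)."""
    return T_cold / (T_hot - T_cold)

# Earth: operate at 4K, reject heat at 300K.
cop_earth = carnot_cop(4.0, 300.0)   # ~0.0135, i.e. about 1.4%
work_earth = 1.0 / cop_earth         # ~74 W of input work per watt lifted from 4K

# Lunar night: an ~80K ambient stage is available for free.
cop_moon = carnot_cop(4.0, 80.0)     # ~0.053
work_moon = 1.0 / cop_moon           # ~19 W per watt: far closer to affordable

print(f"work per watt at 4K: Earth {work_earth:.0f} W, lunar night {work_moon:.0f} W")
```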
Gravitational effects
It's natural for an Earth-bound human to picture a massive supercomputer planted firmly on the solid surface of a big, rocky planet. But such a construction actually comes with a cost. If we think about the structural aspects of building a SOEN megasystem, it will involve stacking many wafers in columns. Consider a SOEN 1 km on a side – out of reach for the time being, but within a few years Great Sky will get there. If this were built on Earth, the gravitational force on the foundation of that structure is sizeable, computed from Newton's universal law of gravity, \[ F = \frac{G\,m_1\,m_2}{r^2}, \] where \(G\) is the gravitational constant, \(m_1\) and \(m_2\) are the masses attracting each other, and \(r\) is the distance between them. If \(m_1\) is the mass of the Earth and \(m_2\) is the mass of the column of wafers, the pressure (force divided by area) is around 40 MPa. This can be endured, but additional structures need to occupy volume to ensure mechanical integrity, which takes away from space for computation and communication. On the other hand, if a SOEN system of the same size were constructed as a free-floating object in space, the pressure on the middle wafer would be about 70 nPa – about fourteen orders of magnitude less pressure for the mechanical structure to endure. This seems like a good reason to avoid construction on the surface of a large, rocky planet.
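The ~40 MPa figure can be reproduced from the equation above: Newton's law with Earth's mass gives the surface gravity \(g = G m_1 / r^2\), and the pressure at the base of the column is then \(\rho g h\). The mean density of the stack is my assumption here – a rough silicon/niobium/fiber mix of 4000 kg/m³:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

# Surface gravity from F = G m1 m2 / r^2, per unit mass of the column.
g = G * M_EARTH / R_EARTH**2   # ~9.8 m/s^2

rho = 4000.0   # kg/m^3, assumed mean density of the stacked-wafer column
h = 1000.0     # m, the 1 km column height

pressure = rho * g * h   # weight per unit area at the foundation, ~40 MPa
print(f"{pressure / 1e6:.0f} MPa")
```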
Power-law connectivity and Rentian scaling
Perhaps the ultimate limit to the size of a single cognitive system comes from the requirement to interconnect all the parts of the system for efficient communication. If the computational modules comprising the system aren't sufficiently interconnected, they effectively behave as isolated computational centers as opposed to an integrated cognitive whole. To analyze how interconnected a system is, one can perform Rentian analysis. The system is conceptually separated into progressively larger volumes. For each volume, one determines the number of computational nodes within the partition defining the volume and the number of communication channels entering or leaving the partition. In a neural system, the computational nodes are neurons and the communication channels are axons coming in and providing signals to synaptic terminals. In graph theory these communication channels are called edges, and for efficient communication, a power law is required: \[ e\sim n^p, \] where \(e\) is a function specifying the number of edges traversing each consecutively larger partition, \(n\) is a function specifying the number of nodes within each consecutively larger partition, and \(p\) is a constant exponent. This expression is a power law, and any system obeying this expression is said to demonstrate Rentian scaling. The brain certainly accomplishes this, and so do many computational systems. If the exponent \(p = 1\), communication is retained equally at all levels of the computational hierarchy; every module is capable of communicating efficiently with every other module, even if they are widely separated. For \(p\lesssim 1\), connectivity decays spatially, but slowly, so the system remains highly correlated across spatial scales.
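The scaling relation is simple to illustrate. Under \(e = k\,n^p\) (the prefactor \(k\) and the node counts below are arbitrary illustrations, not measurements of any real system), an exponent near 1 keeps the number of crossing edges growing nearly as fast as the node count, which is what keeps widely separated modules in conversation:

```python
def crossing_edges(n_nodes, p, k=1.0):
    """Rentian scaling: edges crossing a partition holding n_nodes nodes, e = k * n**p."""
    return k * n_nodes**p

# Nested partitions of a hypothetical system, from a small module to the whole network.
for n in (1e3, 1e6, 1e9, 1e12):
    print(f"n = {n:.0e}: e(p=1.00) = {crossing_edges(n, 1.0):.0e}, "
          f"e(p=0.75) = {crossing_edges(n, 0.75):.0e}")
```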
The challenge is that it is physically impossible for this power law to be maintained across indefinite spatial scales. The system would be overwhelmed with fibers for interconnection. The best way to retain this scaling across the largest possible system is with optical communication over fiber optics at the single-photon level to simultaneously ensure low power density. An exhaustive analysis of the point at which fibers crowd out compute and the system separates into isolated cognitive islands will not be carried out here. To make matters more complicated, the analysis also couples to the time domain, because one must consider the speed at which various modules can communicate to each other. For cognitive coherence, many would argue nested oscillations are required. If the fastest of these are at 100 MHz – as is the case with SOENs – then having galaxy-spanning oscillations once per hundred-thousand years would seem to violate the definition of truly coherent cognitive activity. But we'll have to leave that as a debate for another day.
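The timing argument can be made concrete with light-crossing times, which floor the period of any system-spanning oscillation. The scales below are illustrative; the galaxy row uses a ~100,000 light-year diameter:

```python
C = 2.998e8            # m/s, speed of light
LIGHT_YEAR = 9.461e15  # m
YEAR = 3.156e7         # s

def light_crossing_time(extent_m):
    """One-way light-travel time across a system: a floor on the period
    of its slowest coherent, system-spanning oscillation."""
    return extent_m / C

scales = {
    "1 km SOEN":    1e3,
    "Earth-Moon":   3.84e8,
    "solar system": 1e13,              # roughly the scale of Neptune's orbit
    "galaxy":       1e5 * LIGHT_YEAR,  # ~100,000 light-year diameter
}
for name, extent in scales.items():
    t = light_crossing_time(extent)
    print(f"{name}: {t:.1e} s ({t / YEAR:.1e} yr)")
# A 100 MHz SOEN oscillation has a 10 ns period; a galaxy-spanning mode
# completes roughly one cycle per hundred-thousand years.
```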
The dark side of the Moon
In the analysis in the section on the Moon, we assume the hemisphere of the Moon bathed in solar radiation is covered in solar panels, and the hemisphere shaded from the sun is covered in compute. The Moon is tidally locked to Earth, so as it orbits, the half receiving sunlight shifts. We are assuming a mechanical structure is built that allows the solar panels and compute to rotate around the orb as it circles the Earth. This appears to be a minor technological challenge compared to the undertaking of devising a computing architecture to realize a brain the size of the Moon.