Here are some thoughts trying to steelman this idea.<p>Firstly, let's note that the founding team is certainly not inexperienced. One of them has worked at both SpaceX and Microsoft on the datacenter side, another claims ten years of experience designing satellites at Airbus and holds a PhD in materials science, and the CEO has mostly a business background but also worked on US national security satellite projects (albeit at McKinsey).<p>They make a big deal of it being about AI training, but inference looks like a much better target. Training clusters are much harder to build than inference clusters: the hardware goes obsolete faster, they need much higher inter-node connectivity, and you need ultra-high-bandwidth access to massive datasets, so a large cluster must exist before anything is useful at all. Inference is a much easier problem. Nodes can work independently, the bandwidth needs are minimal, and latency hardly matters either, as an ever-increasing number of customers use inference in batch jobs. Offloading inference to the sky means the GPUs and power loads that <i>do</i> remain on earth can be fully dedicated to training instead of serving, which dodges power constraints on land for a while longer (potentially for as long as needed: we don't know whether training will continue to scale up its compute needs, whereas we can be fairly confident inference will).<p>If you target inference, then instead of needing square kilometers of solar panels and radiators you can get away with a constellation of much smaller craft that scale horizontally instead of vertically. Component failures are also handled easily, just like for any other satellite: stop sending requests to the affected units, and deorbit them once enough internal components have failed.
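That "just stop sending requests" failure model is easy to sketch. A toy ground-side scheduler (all names and thresholds here are hypothetical, not anything the company has described) might look like:

```python
import random

class ConstellationRouter:
    """Hypothetical ground scheduler: route batch-inference jobs only
    to satellites still marked healthy, and exclude a unit (queueing
    it for deorbit) once enough internal components have failed."""
    DEORBIT_THRESHOLD = 3  # failed components before we give up on a sat

    def __init__(self, sat_ids):
        self.failures = {sat: 0 for sat in sat_ids}

    def healthy(self):
        return [s for s, n in self.failures.items() if n < self.DEORBIT_THRESHOLD]

    def report_failure(self, sat):
        self.failures[sat] += 1

    def route(self, job):
        # Horizontal scaling: any healthy node can take any job,
        # since inference nodes work independently of each other.
        candidates = self.healthy()
        if not candidates:
            raise RuntimeError("no healthy satellites")
        return random.choice(candidates)

router = ConstellationRouter(["sat-1", "sat-2", "sat-3"])
for _ in range(3):
    router.report_failure("sat-2")  # sat-2 crosses the threshold
# sat-2 is now excluded from routing and flagged for deorbit.
```

The point of the sketch is that there is no failover protocol, no state migration, nothing: a dead unit just drops out of the candidate list, which is exactly why inference tolerates attrition so much better than a tightly coupled training cluster.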
Most GPU failures in the datacenter are silicon or transceiver failures caused by thermal cycling anyway, and if you focus on batch inference you can keep the thermal load very steady by buffering requests on the ground to smooth out submission spikes.<p>Ionising radiation isn't necessarily a problem. As they note, AI is non-deterministic anyway, and the software architectures are designed to be resilient to transient computation errors. Only the deterministic parts, like regular CPUs, need to be rad-hardened. And because you aren't going to maintain the components anyway, you get design possibilities that wouldn't make sense on earth.<p>A focus on inference has yet another advantage w.r.t. heat management. For training, right now the only game in town is either TPUs (not available to put on a satellite) or Nvidia GPUs (designed for unlimited power/cooling availability). For inference there is a wider range of chips, some designed for mobile use cases where energy and thermal efficiency are paramount. You could even design your own ASICs that trade off latency against energy usage to reduce your power and cooling needs in space.<p>Finally, although heat management in space is classically done with radiators, if launch costs get really low you could consider alternative approaches like droplet radiators, or concentrating heat into physical materials that are then ejected over the oceans. Because the satellites are unmanned, it opens up the possibility of using dangerous coolants that wouldn't normally be reasonable, like hydrogen or liquid sodium. That would mean regular "recooling runs", but if launch costs get low enough, maybe that's actually feasible.
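To put a rough number on the radiator problem: a back-of-envelope Stefan-Boltzmann calculation (my assumptions, not theirs: a flat panel radiating from both faces, emissivity 0.9, held at 300 K, solar and albedo loading ignored) gives a feel for the area involved.

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_load_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Panel area needed to reject heat_load_w purely by radiation,
    under the idealised assumptions above (no incoming solar flux)."""
    flux = sides * emissivity * SIGMA * temp_k ** 4  # W per m^2 of panel
    return heat_load_w / flux

# A 100 kW inference node works out to roughly 120 m^2 of panel.
area = radiator_area_m2(100_000)
```

Even under these generous assumptions you're shedding well under 1 kW per square meter of panel, which is why anything aiming at training-scale power draws balloons into square kilometers of radiator, while a modest inference node stays in the range a single launch could plausibly carry.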