I'm confused about the level of conversation here. Can we actually run the math on heat dissipation and feasibility?
A Starlink satellite uses about 5 kW of solar power. It needs to dissipate around that amount (plus the solar heating on it) just to operate. There are around 10,000 Starlink satellites already in orbit, which means the Starlink constellation is already effectively equivalent to a 50 megawatt data center (in a rough, back-of-the-envelope feasibility sense).
Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?
Why is Starlink possible and other computations are not? Starlink is also already financially viable. Wouldn't it also become significantly cheaper as we improve our orbital launch vehicles?
The power you can radiate scales with the surface area you can dissipate it from. Lots of small satellites have a much higher surface-area-to-power ratio than fewer, larger satellites. Cooling 10k separate objects is orders of magnitude easier than cooling 10 objects at 1000x the power use, even if the total power output is the same.
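A rough sketch of that argument, assuming a radiator sheds about 400 W per square meter (an illustrative figure for a ~300 K panel, not Starlink's actual radiator performance):

    # Same 50 MW total, split across many small satellites vs. a few big ones.
    FLUX_W_PER_M2 = 400        # assumed heat rejected per m^2 of radiator (~300 K panel)
    total_power_w = 50e6       # the constellation-wide estimate from upthread

    for n_satellites in (10_000, 10):
        per_sat_power_w = total_power_w / n_satellites
        per_sat_area_m2 = per_sat_power_w / FLUX_W_PER_M2
        print(n_satellites, round(per_sat_area_m2), "m^2 of radiator each")
    # 10,000 sats need ~12 m^2 each (a small flat panel); 10 sats at 1000x the
    # power need ~12,500 m^2 each, a huge deployable structure per vehicle.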
Distributing useful work over so many small objects is a very hard problem, and not even shown to be possible at useful scales for many of the things AI datacenters are doing today. And that's with direct cables - using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use and complexity for the communication in the first place.
Building data centres in the middle of the Sahara desert is still much better in pretty much every metric than in space, be it price, performance, maintenance, efficiency, ease of cooling, pollution/"trash" disposal, etc. Even things like communication network connectivity would be easier: at the amounts of money this constellation mesh would cost, you could lay new fibre optic cables to build an entire new global network to anywhere on earth and have new trunk connections to every major hub.
There are advantages to being in space - normally around increased visibility for wireless signals, allowing great distances to be covered at (relatively) low bandwidth. But that comes at an extreme cost. Paying that cost for a use case that simply doesn't get much advantage from those benefits is nonsense.
Simply put no, 50MW is not the typical hyperscaler cloud size. It's not even the typical single datacenter size.
A single AI rack consumes 60kW, and there is apparently a single DC that alone consumes 650MW.
When Microsoft puts in a DC, the machines are done in units of a "stamp", ie a couple racks together. These aren't scaled by dollar or sqft, but by the MW.
And on top of that... That's a bunch of satellites not even trying to crunch data at top speed. Nowhere near the right order of magnitude.
But the focus on building giant monolithic datacenters comes from the practicalities of ground based construction. There are huge overheads involved with obtaining permits, grid connections, leveling land, pouring concrete foundations, building roads and increasingly often now, building a power plant on site. So it makes sense to amortize these overheads by building massive facilities, which is why they get so big.
That doesn't mean you need a gigawatt of power before achieving anything useful. For training, maybe, but not for inference which scales horizontally.
With satellites you need an orbital slot and launch time, and I honestly don't know how hard those are to get, but space is pretty big and the only reason for denying them would be safety. Once those are obtained, you can make satellite inferencing cubes in a factory and just keep launching them on a cadence.
I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).
* Everything is being irradiated all the time. Things need to be radiation hardened or shielded.
* Putting even 1 kg into space takes vast amounts of energy. A Falcon 9 burns roughly 260 MJ of fuel per kg delivered to LEO (rough math sketched after this list). I imagine the embodied energy in the disposable rocket and liquid oxygen makes the total number 2-3x that at least.
* Cooling is a nightmare. The side of the satellite in the sun is very hot, while the side facing space is incredibly cold. No fans or heat sinks - all the heat has to be conducted from the electronics and radiated into space.
* Orbit keeping requires continuous effort. You need some sort of hypergolic rocket, which has the nasty effect of coating all your stuff in horrible corrosive chemicals.
* You can't fix anything. Even a tiny failure means writing off the entire system.
* Everything has to be able to operate in a vacuum. No electrolytic capacitors for you!
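A rough check of the ~260 MJ/kg figure from the second bullet, using approximate public Falcon 9 numbers (the RP-1 load and payload are estimates, and only the kerosene's chemical energy is counted, since the oxygen carries none):

    # Order-of-magnitude check: chemical energy burned per kg delivered to LEO.
    rp1_mass_kg = 147_000            # approx. RP-1 (kerosene) across both stages
    rp1_lhv_mj_per_kg = 43           # lower heating value of kerosene
    payload_to_leo_kg = 22_800       # approx. expendable Falcon 9 payload

    total_energy_mj = rp1_mass_kg * rp1_lhv_mj_per_kg
    print(round(total_energy_mj / payload_to_leo_kg), "MJ per kg to LEO")
    # ~277 MJ/kg, the same ballpark as the ~260 MJ/kg quoted above.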
So I guess the question is - why bother? The only benefit I can think of is very short "days" and "nights" - so you don't need as much solar or as big a battery to power the thing. But that benefit is surely outweighed by the fact you have to blast it all into space? Why not just overbuild the solar and batteries on earth?
Almost none of the parent’s bullet points are solved by building on the Moon instead of in Earth orbit.
The energy demands of getting to the 240k mile Moon are IMMENSE compared to 100 mile orbit.
Ultimately, when comparing the 3 general locations, Earth is still BY FAR the most hospitable and affordable location until some manufacturing innovations drop costs by orders of magnitude. But those manufacturing improvements have to be made in the same jurisdiction that SpaceXAI is trying to avoid building data centers in.
This whole thing screams a solution in search of a problem. We have to solve the traditional data center issues (power supply, temperature, hazard resilience, etc.) wherever the data centers are, whether on the ground or in space. None of these are solved for the theoretical space data centers, but they are all already solved for terrestrial data centers.
Sounds more difficult. Not only is the moon further, you also need to use more fuel to land on it and you also have fine, abrasive dust to deal with. There’s no wind of course, but surely material will be stirred up and resettle based on all the landing activity.
And it’s still a vacuum with many of the same cooling issues. I suppose one upside is you could use the moon itself as a heat sink (maybe).
Yeah, carrying stuff 380k km and still deploying in vacuum (and super dusty ground) doesn't solve anything but adds cost and overhead. One day maybe, but not these next decades nor probably this century.
It would make more sense to develop power beaming technology. Use the knowledge from Starlink constellations to beam solar power via microwaves onto the rooftops of data centers
> I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).
You'd be wrong. There's a huge incentive to optimize radiator tech because of things like the International Space Station and MIR. It's a huge part of the deployment, due to life having pretty narrow thermal bands. The added cost to deploy that tech also incentivizes hyper optimization.
Making bigger structures doesn't make that problem easier.
Fun fact, heat pipes were invented by NASA in the 60s to help address this very problem.
ISS and MIR combined are not a "large market". How many radiators do they require? A single space DC will probably demand whole orders of magnitude more cooling.
The ISS is a government project that's heading towards EOL, it has no incentive to heavily optimize anything because the people who built it don't get rich by doing so. SpaceX is what optimization looks like, not the ISS.
It's a private company, is profit motivated, and thus has reason to optimize. That was the parent poster's point.
Starship isn't largely a government project. It was planned a decade before the government was ever involved, they came along later and said "Hey, this even more incredible launch platform you're building? Maybe we can hire SpaceX to launch some things with it?"
Realistically, SpaceX launches far more payload than any government.
There is a lot of hand waving away of the orders of magnitude more manufacturing, more launches, and more satellites that have to navigate around each other.
We still don’t have any plan I’ve heard of for avoiding a cascade of space debris when satellites collide and turn into lots of fast moving shrapnel. Yes, space is big, but low Earth orbit is a very tiny subset of all space.
The amount of propulsion satellites have before they become unable to maneuver is relatively small and the more satellite traffic there is, the faster each satellite will exhaust their propulsion gasses.
>There is a lot of hand waving away of the orders of magnitude more manufacturing, more launches, and more satellites that have to navigate around each other.
This is exactly like the Boring Company plans to "speed up" boring. Lots of hand waving away decades of commercial boring, sure that their "great minds" can do 10x or 100x better than modern commercial applications. Elon probably said "they could just run the machines faster! I'm brilliant".
Could this be about bypassing government regulation and taxation? Silkroad only needed a tiny server, not 150kW.
The Outer Space Treaty (1967) has a loophole. If you launch from international waters (planned by SpaceX) and the equipment is not owned by a US-company or other legal entity there is significant legal ambiguity. This is Dogecoin with AI. Exploiting this accountability gap and creating a Grok AI plus free-speech platform in space sounds like a typical Elon endeavour.
Untrue. In all cases, the state in which the entity operating the vessel is registered is responsible for that spacefaring vessel. If it's not SpaceX directly but a shell company in Ecuador carrying out the launch, Ecuador will be completely responsible for anything happening with and around the vessel, period. There are no loopholes in this system.
Good point - the comms satellites are not even "keeping" some of the energy, while a DC would. I _am_ now curious about the connection between bandwidth and wattage, but I'm willing to bet that less than 1% of the total energy dissipation on one of these DC satellites would be in the form of satellite-to-earth broadcast (keeping in mind that s2s broadcast would presumably be something of a wash).
I think you missed the point. If you have a 100 MW communication satellite and a 100 MW compute satellite, those are very different beasts. The first might send 50% of the energy away as radio communication, making it effectively a 50 MW satellite for cooling purposes.
No, they didn't. You can't "send away" thermal energy via radio waves. At the temperatures we're talking about, thermal energy is in the infrared. That's blackbody radiation.
Is the SpaceX thin-foil cooling based on graphene real? Can experts check this out?
"SmartIR’s graphene-based radiator launches on SpaceX Falcon 9" [1]. This could be the magic behind this bet on heat radiation through exotic material. Lot of blog posts say impossible, expensive, stock pump, etc. Could this be the underlying technology breakthrough? Along with avoiding complex self-assembly in space through decentralization (1 million AI constellation, laser-grid comms).
This coating looks like it can selectively make parts of the satellite radiators or insulators, so as to regulate temperature. But I don't think it can change the fundamental physics of radiating unwanted heat: you can't do better than black body radiation.
Indeed, graphene seems capable of reaching 0.99 of the black body radiation limit.
Quote: "emissivity higher than 0.99 over a wide range of wavelengths". Article title "Perfect blackbody radiation from a graphene nanostructure" [1]. So several rolls of 10 x 50 meters graphene-coated aluminium foil could have significant cooling capability. No science-fiction needed anymore (see the 4km x 4km NVIDIA fantasy)
It's not as exciting as you think it is. "emissivity higher than 0.99 over a wide range of wavelengths" is basically code for "it's, like, super black"
The limiting factor isn't the emissivity, it's that you're having to rely on radiation as your only cooling mechanism. It's super slow and inefficient and it limits how much heat you can dissipate.
Like the other person said, you can't do any better than blackbody radiation (emissivity=1).
It's like this. Everything about operating a datacenter in space is more difficult than it is to operate one on earth.
1. The capital costs are higher, you have to expend tons of energy to put it into orbit
2. The maintenance costs are higher because the lifetime of satellites is pretty low
3. Refurbishment is next to impossible
4. Networking is harder, either you are ok with a relatively small datacenter or you have to deal with radio or laser links between satellites
For Starlink this isn't as important. Starlink provides something that can't really be provided any other way, but even so, just the US uses 176 terawatt-hours of electricity per year for data centers, so Starlink is 1/400th of that, assuming your estimate is accurate (and I'm not sure it is - does it account for the night cycle?)
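For scale, that 1/400th figure follows directly from those two numbers (taking the 176 TWh/year figure and the 50 MW constellation estimate from upthread at face value):

    # US data-center electricity use vs. the ~50 MW Starlink estimate.
    us_dc_twh_per_year = 176
    avg_draw_mw = us_dc_twh_per_year * 1e6 / 8760   # TWh/yr -> average MW, ~20,000 MW
    starlink_mw = 50
    print(round(avg_draw_mw / starlink_mw))         # ~400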
What about sourcing and the cost of energy? Solar panels are more efficient in space: no bad weather, and 100% sunlight (depending on orbit). Not that it makes up for the items you listed, but it may not be true that everything is more difficult in space.
Let's say with no atmosphere and no night cycle, a space solar panel is 5x better. Deploying 5x as many solar panels on the ground is still going to come in way under the budget of the space equivalent.
And it's not the same at all. 5x the solar panels on the ground means 5x the power output in the day, still 0 at night. So you'd need batteries. If you add in bad weather and winter, you may need battery capacity for days, weeks or even months, shifting the cost to batteries while still relying on nuclear or fossil backups in case your battery dies or some 3/4/5-sigma weather event outside what you designed for occurs.
> Or you put the data centers at different points on earth?
> Or you float them on the ocean circumnavigating the earth?
What that does have to do with anything? If you want to solar-power them, you still are subject to terrestrial effects. You can't just shut off a data center at night.
> Or we put the datacenters on giant Zeppelins orbiting above the clouds?
They'd have to fly at 50,000+ ft to be clear of clouds, and I doubt you can lift heavy payloads this high using buoyancy given the low air density. High risk to people on the ground in case of failure because there is no re-entry.
> If we are doing fantasy tech solutions to space problems, why not for a million other more sensible options?
How is this a fantasy? With Starlink operational, this hardly seems a mere 'fantasy'.
A capacity problem can be solved by having another data center the other side of the earth.
If it's that the power cycling causes equipment to fail earlier, then that can be addressed far more easily than radiation hardening all equipment so that it can function in space.
Just take the cost of getting a kg into space and compare it to how much power that kg of solar panel will generate.
Current satellites get around 150 W/kg from solar panels. The cost of launching 1 kg to space is ~$2,000. So we're at $13.3(3) per watt. We need to double it because the same amount of power needs to be dissipated, so let's round it to $27.
One NVidia GB200 rack is ~120 kW. Just to power it, you need to send $3,240,000 worth of payload into space. Then you need to send an additional $3,106,000 worth of servers (a rack of them is 1,553 kg). Plus some extra for piping.
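Spelling out that arithmetic (the $2,000/kg launch price, 150 W/kg panels, the doubling for heat rejection, and the rack figures are all assumptions from this comment):

    # Launch-mass cost to power and fly one GB200-class rack, per the numbers above.
    launch_usd_per_kg = 2_000          # assumed $/kg to orbit
    panel_w_per_kg = 150               # assumed solar specific power
    usd_per_watt = 2 * launch_usd_per_kg / panel_w_per_kg   # x2 for radiators, ~$26.7/W

    rack_power_w = 120_000             # NVIDIA GB200 NVL72 rack, ~120 kW
    rack_mass_kg = 1_553
    print(round(usd_per_watt * rack_power_w))        # ~$3.2M of solar + radiator mass
    print(rack_mass_kg * launch_usd_per_kg)          # $3,106,000 to launch the rack itself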
Over 10 years ago, the best satellites had 500W/kg [2]. Modern solar panels that are designed to be light are at 200g per sqm [1]. That's 5sqm per kg. One sqm generates ca. 500W. So we're at 2.5kW per kg. Some people claim 4.3kW/kg possible.
Starship launch costs have a $100/kg goal, so we'd be at $40 / kW, or $4800 for a 120kW cluster.
120 kW is about 1 GWh annually, which costs you around $130k per year to operate in Europe. ROI in 14 days. Even if launch costs aren't that low in the beginning and there's a lot more stuff to send up, your ROI might be a year or so, which is still good.
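The same calculation with these more optimistic assumptions (2.5 kW/kg panels, $100/kg Starship launch, ~$130/MWh European electricity; all numbers are from the comment, not measured):

    # Optimistic case: lightweight panels + aspirational Starship pricing.
    panel_kw_per_kg = 2.5              # ~200 g/m^2 panels at ~500 W/m^2 (assumed)
    launch_usd_per_kg = 100            # Starship cost goal (assumed)
    launch_usd_per_kw = launch_usd_per_kg / panel_kw_per_kg   # $40/kW

    rack_power_kw = 120
    solar_launch_cost = launch_usd_per_kw * rack_power_kw     # $4,800

    annual_mwh = rack_power_kw * 8760 / 1000                  # ~1.05 GWh/year
    ground_energy_cost_per_year = annual_mwh * 130            # ~$137k at $130/MWh
    print(round(solar_launch_cost / ground_energy_cost_per_year * 365), "days")
    # ~13 days, in line with the roughly two-week payback above.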
Solar panels in space are more efficient, but on the ground we have dead dinosaurs we can burn. The efficiency gain is also more than offset by the fact that you can't replace a worn out panel. A few years into the life of your satellite its power production drops.
Terrestrial data centers save money and recoup costs by salvaging and recycling components, so what you're saying here is that space-based datacenters are even less competitive than we previously estimated.
... if you completely ignore the difficulty of getting them up there. I'd be interested to see a comparison between the amount of energy required to get a solar panel into space, and the amount of energy it produces during its lifetime there. I wouldn't be surprised if it were a net negative; getting mass into orbit requires a tremendous amount of energy, and putting it there with a rocket is not an efficient process.
The cost might be the draw (if there is one). Big tech isn't afraid of throwing money at problems, but the AI folk and financiers are afraid of waiting and uncertainty. A satellite is crazy expensive but throwing more money at it gets you more satellites.
At the end of the day I don't really care either way. It ain't my money, and their money isn't going to get back into the economy by sitting in a brokerage portfolio. To get them to spend money this is as good a way as any other, I guess. At least it helps fund a little spaceflight and satellite R&D on the way.
>1. The capital costs are higher, you have to expend tons of energy to put it into orbit
Putting 1 kW of solar on land: ~$2K. Putting it into orbit on Starship (current ground-based heavy solar panels, 40 kg for 4 m² of 1 kW in space): anywhere between $400 and $4K.
Add to that that the costs on Earth will only be growing, while costs in space will be falling.
Ultimately Starship's costs will come down to the bare cost of fuel + oxidizer, 20 kg per 1 kg in LEO, i.e. less than $10/kg, if they manage streamlined operations and high reuse. Yet even at $100/kg, it is still better in space than on the ground.
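The comparison being made, with this comment's assumed figures (40 kg of heavy terrestrial panel per kW, launch priced between the $10/kg fuel-only floor and $100/kg):

    # Ground install vs. launch-only cost for 1 kW of heavy, ground-style solar.
    ground_usd_per_kw = 2_000            # assumed installed cost on land
    panel_kg_per_kw = 40                 # ~4 m^2 of heavy panel per kW (assumed)

    for launch_usd_per_kg in (10, 100):  # fuel-cost floor vs. near-term Starship target
        print(launch_usd_per_kg, panel_kg_per_kw * launch_usd_per_kg)
    # $400 at $10/kg and $4,000 at $100/kg, bracketing the $2,000 ground figure.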
>That would make your solar panel (40kg) around $60K to put into space.
with the GPU costing the same, it would only double the capex.
>Even being generous and assuming you could get it to $100 per kg that's still $4000
Noise compared to the main cost - GPUs.
>There's a lot of land in the middle of nowhere that is going to be cheaper than sending shit to space.
Cheapness of location of your major investment - GPUs - may as well happen to be secondary to other considerations - power/cooling capacity stable availability, jurisdiction, etc.
Can only speculate out of thin air - the B200 and Ryzen 9950X are made on the same process and have an 11x difference in die size. 11 Ryzens would cost $6K, and with 200 GB of RAM - $8K. Googling suggests the B200 cost of production is $6,400. That matches the numbers from the Ryzen-based estimate above (the Ryzen number is retail, yet it has higher yield, so it balances out). So I'd guess that, given Google scale, a TPU similar to the B200 should be $6K-$10K.
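A sketch of that die-size extrapolation (every input here is a guess from the comment; yield, packaging, and HBM pricing are all hand-waved):

    # Rough die-area scaling of B200 production cost from a consumer part.
    ryzen_retail_usd = 550          # assumed retail price of one Ryzen 9950X-class CPU
    die_area_ratio = 11             # claimed B200 vs. Ryzen die-size ratio
    memory_usd = 2_000              # assumed cost of ~200 GB of memory

    silicon_estimate = ryzen_retail_usd * die_area_ratio       # ~$6K
    print(silicon_estimate, silicon_estimate + memory_usd)     # ~$6K silicon, ~$8K with memory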
This is the big thing, but Elon's child porn generator in orbit will be subject to US jurisdiction, just as much as if they were in Alaska. I guess he can avoid state law.
If jurisdiction is key, you can float a DC in international waters on a barge flying the flag of Panama or similar flag of convenience which you can pretty much buy at this scale. Pick a tin-pot country, fling a few million to the dictator, and you're set - with far less jurisdiction problems than a US, Russia, France launched satellite.
It is SpaceX/Elon who bet billions on that yadda-yadda, not me. I wrote "If" for $10/kg. I'm sure though that they would easily yadda-yadda under sub-$100/kg - which is $15M per flight. And even with those $100/kg the datacenters in space still make sense as comparable to ground based and providing the demand for the huge Starship launch capacity.
A datacenter costs ~$1,000/ft^2. How much equipment is there per square foot? Say 10 kg (a roughly 1-ton rack amortized over rack plus hallway space). That's $1,000 to put into orbit on Starship at $100/kg. At sub-$50/kg, you can put into orbit all the equipment plus solar panels and it would still be cheaper than on the ground.
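Working that per-square-foot comparison through (the build cost and equipment density are the rough assumptions stated above):

    # Launch cost per square foot of equipment vs. terrestrial construction cost.
    build_usd_per_sqft = 1_000           # assumed ground DC construction cost
    equipment_kg_per_sqft = 10           # ~1-ton rack amortized over rack + hallway space

    for launch_usd_per_kg in (100, 50):
        print(launch_usd_per_kg, equipment_kg_per_sqft * launch_usd_per_kg)
    # $1,000/ft^2 at $100/kg (parity with ground construction), $500/ft^2 at $50/kg.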
It looks like you’re comparing the cost of installing solar panels on the ground with the cost of just transporting them to orbit. You can’t just toss raw solar panels out of a cargo bay.
> Putting 1 kW of solar on land: ~$2K. Putting it into orbit on Starship (current ground-based heavy solar panels, 40 kg for 4 m² of 1 kW in space): anywhere between $400 and $4K.
What starship? The fantasy rocket Musk has been promising for 10 years or the real one that has thus far delivered only one banana worth of payload into orbit?
You are presented with a factual, verifiable, statement that starship has been promised for years and that all that's been delivered is something capable of sending a banana to LEO. Wayyyy overdue too.
You meet this with "well, once it works, it'll be amazing and you'll be queuing up"? How very very musky!
I have no idea if SpaceX will ever make the upper stage fully reusable. The space shuttle having existed isn't an existence proof, given the cost of repairs needed between missions.
However, with Starship SpaceX has both done more and less than putting a banana in orbit. Less, because it's never once been a true orbit; more, because these are learn-by-doing tests, all the reporting seems to be in agreement that it could already deliver useful mass to orbit if they wanted it to.
But without actually solving full reusability for the upper stage, this doesn't really have legs. Starship is cheap enough to build they can waste loads of them for this kind of testing, but not cheap enough for plans such as these to make sense if they're disposable.
The bean counters at NVidia recently upped the expected lifecycle from 5 years to 6. On paper, you are expected now to get 6 years out of a GPU for datacenter use, not 3-5.
Compare the cost of a RAD750 (the processor on the JWST) to its non rad hardened variant. Additionally, consider the processing power of that system to modern AI demands.
> The maintenance costs are higher because the lifetime of satellites is pretty low
Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean...
If anything, considering this + limited satellite lifetime, it almost looks like a ploy to deal with the current issue of warehouses full of GPUs and the questions about overbuild with just the currently actively installed GPUs (which is a fraction of the total that Nvidia has promised to deliver within a year or two).
Just shoot it into space where it's all inaccessible and will burn out within 5 years, forcing a continuous replacement scheme and steady contracts with Nvidia and the like to deliver the next generation at the exact same scale, forever
> Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean
Hell, you're going to lose some fraction of chips to entropy every year. What if you could process those into reaction mass?
I believe that a modern GPU will burn out immediately. Chips for space are using ancient process nodes with chunky sized components so that they are more resilient to radiation. Deploying a 3nm process into space seems unlikely to work unless you surround it with a foot of lead.
Reminds me of the proposal to deorbit end of life satellites by puncturing their lithium batteries :)
The physics of consuming bits of old chip in an inefficient plasma thruster probably work, as do the crawling robots and crushers needed for orbital disassembly, but we're a few years away yet. And whilst on orbit chip replacement is much more mass efficient than replacing the whole spacecraft, radiators and all, it's also a nontrivial undertaking
This brings a whole new dimension to that joke about how our software used to leak memory, then file descriptors, then ec2 instances, and soon we'll be leaking entire data centers. So essentially you're saying - let's convert this into a feature.
> Everything about operating a datacenter in space is more difficult than it is to operate one on earth
Minus one big one: permitting. Every datacentre I know going up right now is spending 90% of their bullshit budget on battling state and local governments.
But since building a datacenter almost anywhere on the planet is more convenient than outer space, surely you can find some suitable location/government. Or put it on a boat, which is still 100 times more sensible than outer space.
> since building a datacenter almost anywhere on the planet is more convenient than outer space, surely you can find some suitable location/government
More convenient. But I'm balancing the cost equation. There are regimes where this balances. I don't think we're there yet. But it's irrational to reject it completely.
> Or put it on a boat, which is still 100 times more sensible than outer space
Surely given starlinks 5ish year deorbit plan, you could design a platform to hold up for that long... And instead of burning the whole thing up you could just refurbish it when you swap out the actual rack contents, considering that those probably have an even shorter edge lifespan.
Starlinks are built to safely burn up on re-entry. A big reusable platform will have to work quite differently to never uncontrollably re-enter, or it might kill someone by high velocity debris on impact.
This adds weight and complexity and likely also forces a much higher orbit.
I can’t wait for all the heavy metals that are put into GPUs and other electronics showering down on us constantly. Wonder why the billionaires have their bunkers.
> If you think there is no paperwork necessary for launching satellites, you are very very wrong
I would be. And granted, I know a lot more about launching satellites than building anything. But it would take me longer to get a satellite in the air than the weeks it will take me to fix a broken shelf in my kitchen. And hyperscalers are connecting in months, not weeks.
> when he talks about subject outside of his domain
Hate to burst your bubble. But I have a background in aerospace engineering. I’ve financed stuff in this field, from launch vehicles to satellites. And I own stakes in a decent chunk of the plays in this field. Both for and against this hypothesis.
So yeah, I’ll hold my ground on having reasonable basis for being sceptical of blanket dismissals of this idea as much as I dismiss certainty in its success.
There are a lot of cheap shots around AI and aerospace. Some are coming from Musk. A lot are coming from one-liner pros. HN is pretty good at filtering those to get the good stuff, which is anyone doing real math.
That actually confirms what the other commenter said.
Your assertion was "Every datacentre I know going up right now is spending 90% of their bullshit budget on battling state and local governments" and you haven't demonstrated any expertise in building data centers.
You've given a very extraordinary claim about DC costs, with no evidence presented, nor expertise cited to sway our priors.
> Your assertion was "Every datacentre I know going up right now is spending 90% of their bullshit budget on battling state and local governments" and you haven't demonstrated any expertise in building data centers
I confirmed "I’ve financed stuff in this field, from launch vehicles to satellites. And I own stakes in a decent chunk of the plays in this field."
We're pseudonymous. But I've put more of my personal money to work around hyperscalers, by a mean multiplier of 10 ^ 9, over the troll who's a walking Gell-Mann syndrome.
I'm engaging because I want to challenge my views. Reddit-style hot takes are not that.
I mean, you don't have zoning in space, but you have things like international agreements to avoid, you know, catastrophic human development situations like Kessler syndrome.
All satellites launched into orbit these days are required to have de-orbiting capabilities to "clean up" after EOL.
I dunno, two years ago I would have said municipal zoning probably ain't as hard to ignore as international treaties, but who the hell knows these days.
So what? Why is it important to have 24/7 solar, that you cannot have on the ground? On the ground level you have fossil fuels.
I wonder if you were thinking about muh emissions for a chemical rocket launched piece of machinery containing many toxic metals to be burnt up in the air in 3-5 years... It doesn't sound more environmentally friendly.
Parent just means "a lot" and is using 90% to convey their opinion. The actual numbers are closer to 0.083%[1][2][3][4] and parent thinks they should be 0.01-0.1% of the total build cost.
1. Assuming 500,000 USD in permitting costs. See 2.
2. Permits and approvals: Building permits, environmental assessments, and utility connection fees add extra expenses. In some jurisdictions, the approval process alone costs hundreds of thousands of dollars. https://www.truelook.com/blog/data-center-construction-costs
3. Assuming a 60MW facility at $10M/MW. See 4.
4. As a general rule, it costs between $600 to $1,100 per gross square foot or $7 million to $12 million per megawatt of commissioned IT load to build a data center. Therefore, if a 700,000-square foot, 60-megawatt data center were to be built in Northern Virginia, the world’s largest data center market, it would cost between $420 million and $770 million to construct the facility, including its powered shell and equipping the building with the appropriate electrical systems and HVAC components. https://dgtlinfra.com/how-much-does-it-cost-to-build-a-data-...
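Spelling out the 0.083% figure (assumed inputs: $500k of permitting on a 60 MW facility at $10M per MW, per the footnotes above):

    # Permitting as a share of total data-center build cost.
    permitting_usd = 500_000
    facility_mw = 60
    build_usd_per_mw = 10_000_000
    total_build_usd = facility_mw * build_usd_per_mw          # $600M

    print(round(permitting_usd / total_build_usd * 100, 3))   # ~0.083 (percent)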
He said bullshit budget, not budget. He's thinking about opportunity and attention costs, not saying that permits literally have a higher price tag than GPUs.
> Source? I can't immediately find anything like that
I’ve financed two data centers. Most of my time was spent over permitting. If I tracked it minute by minute, it may be 70 to 95%. But broadly speaking, if I had to be told about it before it was solved, it was (a) a real nuisance and (b) not technical.
That may have been the case before, but it is not anymore. I live in Northern VA, the capital of data centers, and it is easier to build one permit-wise than a tree house. Also see the provisions in the OBBB.
This is a huge one. What Musk is looking for is freedom from land acquisition. Everything else is an engineering and physics problem that he will somehow solve. The land acquisition problem is out of his hands and he doesn't want to deal with politicians. He learned from building out the Memphis DC.
So why does he not build here in Europe then? Getting a permit for building a data center in Sweden is just normal industrial zoning that anyone can get for cheap, there is plenty of it. Only challenge is getting enough electricity.
I meant Europe is an example of how not to do regulation. The problem you just mentioned. If you get land easily electricity won't be available and vice versa.
Then maybe you should move here. We have in most cases well functioning regulations. Of course there are counter examples where it has been bad but data centers is not one of them. It is easy to get permits to build one.
There's also a bunch of countries pretty much begging companies to come and build solar arrays. If you rocked up in Australia and said "I'm building a zero-emission data center we'll power from PV" we'd pretty much fall over ourselves to let you do it. Plus you know, we have just a bonkers amount of land.
There is already a Tesla grid levelling battery in South Australia. If what you're really worried about is regulations making putting in the renewable energy expensive, then boy have I got a geopolitically stable, tectonically stable, first-world country where you can do it.
Where a random malicious president can't just hijack the government and giga-companies can't trivially lobby lawmakers for profits at the expense of citizens?
Maybe, but I'm skeptical, because current DCs are not designed to minimize footprint. Has anyone even built a two-story DC? Obviously cooling is always an issue, but not, directly, land.
Now that I think of it, a big hydro dam would be perfect: power and cooling in one place.
Downtown Los Angeles: The One Wilshire building, which is the world's most connected building. There are over twenty floors of data centers. I used Corporate Colo, which was a block or two away. That building had at least 10 floors of data centers.
I think Downtown Seattle has a bunch too (including near Amazon campus). I just looked up one random one and they have about half the total reported building square footage of a 10-story building used for a datacenter: https://www.datacenters.com/equinix-se3-seattle
Amazon’s new campus in Indiana is expected to use 2.2 GW when complete. 50 MW is nothing, and that’s ignoring the fact that most of that power wouldn't actually be used for compute.
This is the main point, I think. I am very much convinced that SpaceX is capable of putting a datacenter into space. I am not convinced they can do it cheaper than building a datacenter on earth.
I would be a lot more convinced they had found a way to solve the unit economics if it was being used to secure billion dollar deposits from other companies rather than as the narrative for rolling a couple of Elon's loss making companies into SpaceX and IPOing...
> Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?
xAI’s first data center buildout was in the 300MW range and their second is in the Gigawatt range. There are planned buildouts from other companies even bigger than that.
So data center buildouts in the AI era need 1-2 orders of magnitude more power and cooling than your 50MW estimate.
Even a single NVL72 rack, just one rack, needs 120kW.
I would assume such a setup involves multiple stages of heat pumps to get from the GPU to a 1400°C radiator. Obviously that's going to impact efficiency.
Also I'm not seriously suggesting that 1400°C radiators are a reasonable approach to cooling a space data centre. It's just intended to demonstrate how infeasible the idea is.
Because 10K satellites have a FAR greater combined surface area than a single space-borne DC would. Stefan-Boltzmann law: the power you can radiate scales with surface area (and with the 4th power of temperature).
Also worth noting that if computing power scales with volume then surface area (and thus radiation) scales like p^2/3. In other words, for a fixed geometry, the required heat dissipation per unit area goes like p^1/3. This is why smaller things can just dissipate heat from their surface, whereas larger things require active cooling.
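A quick illustration of that scaling (assuming a fixed shape scaled up uniformly, so area grows as volume to the two-thirds power):

    # If compute power p scales with volume, radiating area scales as p^(2/3),
    # so the heat each square meter must shed scales as p^(1/3).
    for p in (1, 1_000, 1_000_000):          # relative compute power
        area = p ** (2 / 3)                  # relative radiating surface area
        flux_per_area = p / area             # relative heat load per unit area
        print(p, round(area), round(flux_per_area))
    # A single body with 1,000,000x the compute must push ~100x the heat
    # through every unit of its surface - hence active cooling or huge radiators.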
50MW is on the small side for an AI cluster - probably less than 50k gpus.
if the current satellite model dissipates 5kW, you can't just add a GPU (+1kW). maybe removing most of the downlink stuff lets you put in 2 GPUs? so if you had 10k of these, you'd have a pretty high-latency cluster of 20k GPUs.
I'm not saying I'd turn down free access to it, but it's also very cracked. you know, sort of Howard Hughesy.
A Starlink satellite is mainly just receiving and sending data, the bare minimum of a data center-satellite's abilities; everything else comes on top and would be the real power drain.
> A Starlink satellite uses about 5K Watts of solar power. It needs to dissipate around that amount (+ the sun power on it) just to operate.
This isn't quite true. It's very possible that the majority of that power is going into the antennas/lasers, which technically means that the energy is being dissipated, but it never became heat in the first place. Also, 5 kW of solar power likely only means ~3 kW of actual electrical consumption (you will over-provision a bit, both for when you're behind the earth and also just for safety margin).
> Why is starlink possible and other computations are not?
Aside from the point others have made that 50 MW is small in the context of hyperscalers, if you want to do things like SOTA LLM training, you can't feasibly do it with large numbers of small devices.
Density is key because of latency - you need the nodes to be in close physical proximity to communicate with each other at very high speeds.
For training an LLM, you're ideally going to want individual satellites with power delivery on the order of at least about 20 MW, and that's just for training previous-generation SOTA models. That's nearly 5,000 times more power than a single current Starlink satellite, and nearly 300 times that of the ISS.
You'd need radiator areas in the range of tens of thousands of square meters to handle that. Is it theoretically technically possible? Sure. But it's a long-term project, the kind of thing that Musk will say takes "5 years" that will actually take many decades. And making it economically viable is another story - the OP article points out other issues with that, such as handling hardware upgrades. Starlink's current model relies on many cheap satellites - the equation changes when each one is going to be very, very expensive, large, and difficult to deploy.
Sure, we can run the math on heat dissipation. The Stefan-Boltzmann law is free and open source and its application is high-school-level physics. You talk about 50 MW. You are going to need a lot of surface area to radiate that off at anywhere close to reasonable temperatures.
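Running those numbers: the radiator area required for 50 MW at a few radiator temperatures (an emissivity of 0.9 is assumed, and solar heating and view factors are ignored, so these are optimistic):

    # Radiator area needed to reject 50 MW by thermal radiation alone.
    SIGMA = 5.67e-8                   # Stefan-Boltzmann constant, W/(m^2 K^4)
    EMISSIVITY = 0.9                  # assumed
    POWER_W = 50e6

    for temp_k in (300, 350, 400):    # radiator temperature in kelvin
        flux = EMISSIVITY * SIGMA * temp_k ** 4        # W radiated per m^2
        print(temp_k, "K:", round(POWER_W / flux), "m^2")
    # ~121,000 m^2 at 300 K, ~65,000 m^2 at 350 K, ~38,000 m^2 at 400 K.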
Openrouter is a decent proxy for real world use and Grok is currently 8% of the market: https://openrouter.ai/rankings (and is less than 7% of TypeScript programming)
Grok is losing pretty spectacularly on the user / subscriber side of things.
They have no path to paying for their existence unless they drastically increase usage. There aren't going to be very many big winners in this segment and xAI's expenses are really really big.
I really wonder what will happen when the AI companies can no longer set fire to piles of investor money, and have to transition to profitability or at least revenue neutrality - as that would entail dramatically increasing prices.
Is the plan to have everyone so hopelessly dependent on their product that they grit their teeth and keep on paying?
Think about the stock return over a period - it's composed of capital gains and dividends.
Now what happens when capital gains disappear and perhaps turn into capital losses? Dividends have to go higher.
What does this mean? Less retained earnings / cashflows that can be re-invested.
Apple is the only one that will come out of this OK. The others will be destroyed, for if they don't return cash, the cash balance will be discounted, leading to a further reduction in the value of equity. The same thing that happened to Zuckerberg and Meta with the Metaverse fiasco.
Firms in the private sphere will go bust or be acquired.
> Now what happens when capital gains disappear and perhaps turn into capital losses? Dividends have to go higher
This is not how corporate finance works. Capital gains and losses apply to assets. And only the most disciplined companies boost dividends in the face of decline—most double down and try to spend their way back to greatness.
It'll be a combination of advertising and subscription fees, and there will only be a few big winners.
Gemini is practically guaranteed. With the ad model already primed, their financial resources, their traffic to endlessly promote Gemini (ala Chrome), their R&D capabilities around AI, their own chips, crazy access to training data, and so on - they'd have to pull the ultimate goof to mess up here.
Microsoft is toast, short of a miracle. I'd bet against Office and Windows here. As Office goes down, it's going to take Windows down with it. The great Office moat is about to end. The company struggles, the stock struggles, Azure gets spun off (unlock value, institutional pressure), Office + Windows get spun off - the company splits into pieces. The LLMs are an inflection point for Office and Microsoft is super at risk, backwards regarding AI and they're slow. The OpenAI pursuit as it was done, was a gigantic mistake for Microsoft - one of the dumbest strategies in the history of tech, it left them with their pants down. Altman may have killed a king by getting him to be complacent.
Grok is very unlikely to make it (as is). The merger with SpaceX guarantees its death as a competitor to GPT/Gemini/Claude, it's over. Maybe they'll turn Grok into something useful to SpaceX. More likely they'll slip behind and it'll die rapidly like Llama. The merger is because they see the writing on the wall, this is a bailout to the investors (not named Elon) of xAI, as the forced Twitter rollup was a bailout for the investors of Twitter.
Claude is in a weird spot. What they have is not worth $300-$500 billion. Can they figure out how to build a lot more value out of what they have today (and get their finances sustainable), before the clock runs out? Or do they get purchased by Meta, Microsoft, etc.
OpenAI has to rapidly roll out the advertising model and get the burn rate down to meaningless levels, so they're no longer dependent on capital markets for financing (that party is going to end suddenly).
Meta is permanently on the outside looking in. They will never field an in-house competitor to GPT or Gemini that can persistently keep up. Meta doesn't know what it is or why it should be trying to compete with GPT/Gemini/Claude. Their failure (at this) is already guaranteed. They should just acquire GPT 4o and let their aging userbase on FB endlessly talk itself into the grave for the next 30 years while clicking ads.
If Amazon knew what they were doing (they don't right now), they would: immediately split retail + ads and AWS. The ad business ensures that the retail business will continue to thrive and would be highly lucrative. Then have AWS purchase Anthropic when valuations drop, bolt it on to AWS everything. Far less of an anti-trust issue than if what is presently known as Amazon attempted it here and now. Anthropic needs to build a lot on to itself to sustain itself and justify its valuation, AWS already has the answer to that.
If valuations plunge, and OpenAI is not yet sustainable, Microsoft should split itself into pieces and have the Windows-Office division purchase OpenAI as their AI option. It'd be their only path to avoiding anti-trust blocking that acquisition. As is Microsoft would not be allowed to buy OpenAI. Alternatively Microsoft can take a shot at acquiring Anthropic at some point - this seems likely given the internal usage going on at Redmond, the primary question is anti-trust (but in this case, Anthropic is viewed as the #3, so Microsoft would argue it bolsters competition with GPT & Gemini).
Why do you say Amazon doesn't know what they are doing? I think among those mentioned, they are the best positioned alongside Apple in the grander schema of things.
Also you say Meta will never field a competitor to GPT - but they did Llama; not as a commercial product, but probably an attempt at it (and it failed). Otherwise agreed.
"Gemini is practically guaranteed. With the ad model already primed, their financial resources, their traffic to endlessly promote Gemini (ala Chrome), their R&D capabilities around AI, their own chips, crazy access to training data, and so on - they'd have to pull the ultimate goof to mess up here"
I'm not convinced of this TBH in the long run. Google is seemingly a pure-play technology firm that has to make products for the sake of it, else the technology is not accessible/usable. Does that mean they are at their core a product firm? Nah. That's always been Apple's core thing, alongside superior marketing.
One only has to compare Google's marketing of the Pixel phone to Apple - it does not come close. Nobody connects with Google's ads, the way they do with Apple. Google has a mountain to climb and has to compensate the user tremendously for switching.
Apple will watch the developments keenly and figure out where they can take advantage of the investments others have made. Hence the partnerships et al with Google.
Merging with SpaceX means they don't have to pay for their existence. Anyway they're probably positioned better than any other AI player except maybe Gemini.
I don’t follow why merging with SpaceX means they don’t have to pay for their existence. Someone does. Presumably now that is SpaceX. What is SpaceX’s revenue?
Maybe the idea is that SpaceX has access to effectively unlimited money through the US Government, either via ongoing lucrative contracts, or likely bailouts if needed. The US Govt wouldn't bail out xAI but they would bail out SpaceX if they are in financial trouble.
I found mistakes in the spreadsheet backing up 2 published articles (corporate governance). The (tenured Ivy) professor responded by paying me (after I’d graduated) to write a comprehensive working paper that relied on a fixed spreadsheet and rebutted the articles.
Talk to a VC as well. Or more than one, and ideally ones that are in the medical device space and have done similar deals with others from your university. If they are interested then they can apply pressure on the university too, to get to a solution that works for everyone.
The New Zealand Active Investor Plus resident program requires $5m NZD, which is under $3m USD, but that would take everything. There is another program mooted where you buy a business for less than that.
NZ’s Active Investor Plus program is more like EB-5 than this. AIP requires that migrants invest their funds, not donate them.
The Growth category requires fewer residency days and a NZ$5m (~US$3m) investment in “growth” companies or funds, including VC funds and companies that VC funds invest with.
The Balanced category requires double the investment and has a wider range of asset classes, but also a longer duration and higher number of days of residency required.
If you copy the generated url and put it into the entry field (and repeat) then you end up at a bitcoin site. As Bubblerings has pointed out that has malware.
> If you copy the generated url and put it into the entry field (and repeat) then you end up at a bitcoin site.
Uh, what? I just tried it a few times, and it seems to just follow the redirect each time, always ending up back at the original target URL I entered. How many times did you have to "repeat" to make that happen?
> As Bubblerings has pointed out that has malware.
No, that's not what BubbleRings said. BubbleRings said one site on VirusTotal reported it was malware. That sounds like a false positive because the URL is fishy, which is the entire point of the joke here.
It probably increases Elon's share of the combined entity.
It delivers on a promise to investors that he will make money for them, even as the underlying businesses are lousy.