
It's exiting the 5th-best social network and the 10th-best (or worse) AI company by selling them to a decent company.

It probably increases Elon's share of the combined entity.

It delivers on a promise to investors that he will make money for them, even as the underlying businesses are lousy.


I'm confused about the level of conversation here. Can we actually run the math on heat dissipation and feasibility?

A Starlink satellite uses about 5 kW of solar power. It needs to dissipate around that amount (plus the solar heating on it) just to operate. There are around 10,000 Starlink satellites already in orbit, which means the Starlink constellation is already effectively equivalent to a 50-megawatt datacenter (in a rough, back-of-the-envelope feasibility sense).

Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?

Why is Starlink possible while other computation in orbit is not? Starlink is also already financially viable. Wouldn't it also become significantly cheaper as we improve our orbital launch vehicles?
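For what it's worth, a quick sketch of that arithmetic, taking the figures above at face value (5 kW per satellite, ~10,000 satellites) and comparing against the ~120 kW per NVL72 rack figure quoted elsewhere in this thread; all of these are rough assumptions, not measured data:

    # Back-of-the-envelope: total constellation power vs. AI rack power.
    SAT_POWER_KW = 5.0      # assumed solar power per Starlink satellite
    NUM_SATS = 10_000       # assumed satellites in orbit
    RACK_POWER_KW = 120.0   # NVL72 rack draw quoted downthread

    constellation_mw = SAT_POWER_KW * NUM_SATS / 1_000
    equivalent_racks = constellation_mw * 1_000 / RACK_POWER_KW

    print(f"Constellation power: {constellation_mw:.0f} MW")   # ~50 MW
    print(f"Equivalent NVL72 racks: {equivalent_racks:.0f}")   # ~417 racks

So the whole constellation's power budget is on the order of a few hundred dense AI racks, which is roughly what the replies below argue.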


Simply put, no: 50MW is not the typical hyperscaler cloud size. It's not even the typical size of a single datacenter.

A single AI rack consumes 60kW, and there is apparently a single DC that alone consumes 650MW.

When Microsoft puts in a DC, the machines are done in units of a "stamp", i.e. a couple of racks together. These aren't scaled by dollars or square feet, but by megawatts.

And on top of that, that's a bunch of satellites not even trying to crunch data at top speed. Nowhere near the right order of magnitude.


But the focus on building giant monolithic datacenters comes from the practicalities of ground-based construction. There are huge overheads involved in obtaining permits, grid connections, leveling land, pouring concrete foundations, building roads and, increasingly often now, building a power plant on site. So it makes sense to amortize these overheads by building massive facilities, which is why they get so big.

That doesn't mean you need a gigawatt of power before achieving anything useful. For training, maybe, but not for inference which scales horizontally.

With satellites you need an orbital slot and launch time, and I honestly don't know how hard those are to get, but space is pretty big and the only reason for denying them would be safety. Once those are obtained, you can make satellite inferencing cubes in a factory and just keep launching them on a cadence.

I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we have put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If there is one now, there's probably a lot of low-hanging fruit (droplet radiators, maybe).


New GPU-dense racks are going up to 300kW, but I believe the norm at the moment for hyperscalers is somewhere around 150kW; can someone confirm?

The energy demand of these DCs is monstrous; I seriously can't imagine something similar being deployed in orbit...


How much of that power is radiated as the radio waves it sends?

Good point - the comms satellites are not even "keeping" some of the energy, while a DC would. I _am_ now curious about the connection between bandwidth and wattage, but I'm willing to bet that less than 1% of the total energy dissipation on one of these DC satellites would be in the form of satellite-to-earth broadcast (keeping in mind that s2s broadcast would presumably be something of a wash).

I doubt half the power is to the transmitter, and radio efficiency is poor -- 20% might be a good starting point.

Is the SpaceX thin-foil cooling based on graphene real? Can experts check this out?

"SmartIR’s graphene-based radiator launches on SpaceX Falcon 9" [1]. This could be the magic behind this bet on heat radiation through exotic material. Lot of blog posts say impossible, expensive, stock pump, etc. Could this be the underlying technology breakthrough? Along with avoiding complex self-assembly in space through decentralization (1 million AI constellation, laser-grid comms).

[1] https://www.graphene-info.com/smartir-s-graphene-based-radia...


It entirely depends on the band: at 10GHz it's more like 40%, at lower frequencies more; the FM band can even go to 70%.

The majority is likely in radio waves and the inter-satellite laser communication.

Inter-sat comms cancel out - every kW sent by one sat is received by another.

It doesn't, because the beams are not so tight that they all fall on the target satellite, and not all of that is absorbed :P

For another reference, the Nvidia-OpenAI deal is reportedly 10GW worth of DC.

It's like this. Everything about operating a datacenter in space is more difficult than it is to operate one on earth.

1. The capital costs are higher; you have to expend tons of energy to put it into orbit

2. The maintenance costs are higher because the lifetime of satellites is pretty low

3. Refurbishment is next to impossible

4. Networking is harder, either you are ok with a relatively small datacenter or you have to deal with radio or laser links between satellites

For Starlink this isn't as important. Starlink provides something that can't really be provided any other way, but even so, the US alone uses 176 terawatt-hours of power for data centers, so Starlink is about 1/400th of that, assuming your estimate is accurate (and I'm not sure it is; does it account for the night cycle?).


What about sourcing and the cost of energy? Solar panels are more efficient, there's no bad weather, and they're in sunlight 100% of the time (depending on orbit) in space. Not that it makes up for the items you listed, but it may not be true that everything is more difficult in space.

Let's say with no atmosphere and no night cycle, a space solar panel is 5x better. Deploying 5x as many solar panels on the ground is still going to come in way under the budget of the space equivalent.

And it's not the same at all. 5x the solar panels on the ground means 5x the power output in the day, still 0 at night. So you'd need batteries. If you add in bad weather and winter, you may need battery capacity for days, weeks or even months, shifting the cost to batteries while still relying on nuclear or fossil backups in case your battery dies or some 3/4/5-sigma weather event outside what you designed for occurs.

That's with current launch costs, right? Nobody is claiming it's economic without another huge fall in launch costs, but that's what SpaceX is doing.

Just take the cost of getting a kg into space and compare it to how much the solar panel will generate.

Current satellites get around 150W/kg from solar panels. The cost of launching 1kg to space is ~$2,000. So we're at about $13.33/watt. We need to double it because the same amount needs to be dissipated, so let's round it to $27.

One Nvidia GB200 rack is ~120kW. Just to power it, you need to send $3,240,000 worth of payload into space. Then you need to spend an additional $3,106,000 to launch the servers themselves (a rack of them is 1,553kg). Plus some extra for piping.
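A small sketch reproducing that estimate, using only the parent's assumptions (150 W of solar per kg launched, ~$2,000/kg to LEO, and an equal mass budget for radiators):

    # Launch cost per watt of power in orbit, per the parent's assumptions.
    WATTS_PER_KG = 150.0         # assumed solar output per kg launched
    LAUNCH_COST_PER_KG = 2_000   # assumed $/kg to LEO
    RACK_POWER_W = 120_000       # one GB200 NVL72 rack (quoted)
    RACK_MASS_KG = 1_553         # quoted rack mass

    cost_per_watt = LAUNCH_COST_PER_KG / WATTS_PER_KG   # ~$13.3/W
    cost_per_watt_with_radiators = 27                   # doubled and rounded, as above

    power_launch_cost = RACK_POWER_W * cost_per_watt_with_radiators  # ~$3.24M
    rack_launch_cost = RACK_MASS_KG * LAUNCH_COST_PER_KG             # ~$3.11M

    print(f"${cost_per_watt:.2f}/W bare, ${cost_per_watt_with_radiators}/W with radiators")
    print(f"Launch cost for power: ${power_launch_cost:,.0f}")
    print(f"Launch cost for the servers: ${rack_launch_cost:,.0f}")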


Solar panels in space are more efficient, but on the ground we have dead dinosaurs we can burn. The efficiency gain is also more than offset by the fact that you can't replace a worn out panel. A few years into the life of your satellite its power production drops.

If they plan to put these things in a low orbit, their useful life before reentry is short anyway.

A quick search gave me a lifespan of around 5 years for a Starlink satellite.

If you put a steady stream of new satellites into orbit every year, maintenance is not an issue; you just stop using worn-out or broken ones.


> Solar panels in space are more efficient...

... if you completely ignore the difficulty of getting them up there. I'd be interested to see a comparison between the amount of energy required to get a solar panel into space, and the amount of energy it produces during its lifetime there. I wouldn't be surprised if it were a net negative; getting mass into orbit requires a tremendous amount of energy, and putting it there with a rocket is not an efficient process.


My sketchy napkin math gives an order of magnitude of a few months of panel output to get it in space.

5kg, 500W panel (don’t exactly know what the ratio is for a panel plus protection and frame for space, might be a few times better than this)

Say it produces about 350kWh per month before losses.

Mass to LEO is something like 10x the weight in fuel alone, so that’s going to be maybe 500kWh. Plus cryogenics etc.

So not actually that bad
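Spelling that napkin math out, with the same assumed numbers (a 5 kg / 500 W panel, ~10x its mass in propellant, and ~500 kWh of chemical energy in that propellant):

    # Energy payback for launching a solar panel to LEO, per the parent's figures.
    PANEL_POWER_KW = 0.5
    LAUNCH_ENERGY_KWH = 500.0    # rough chemical energy of the propellant (assumed)

    monthly_output_kwh = PANEL_POWER_KW * 730        # ~365 kWh/month before losses
    payback_months = LAUNCH_ENERGY_KWH / monthly_output_kwh

    print(f"Monthly output: {monthly_output_kwh:.0f} kWh")
    print(f"Energy payback: ~{payback_months:.1f} months")

i.e. on the order of one to two months of output to pay back the launch energy under those assumptions.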


I'm hard-pressed to think of one thing that is easier in space. Anything I could imagine still requires getting there (in one piece).

Death, and some science. That's it?

Horseshoes.

Achieving a zero-gravity environment, or a vacuum?

The cost might be the draw (if there is one). Big tech isn't afraid of throwing money at problems, but the AI folk and financiers are afraid of waiting and uncertainty. A satellite is crazy expensive but throwing more money at it gets you more satellites.

At the end of the day I don't really care either way. It ain't my money, and their money isn't going to get back into the economy by sitting in a brokerage portfolio. To get them to spend money this is as good a way as any other, I guess. At least it helps fund a little spaceflight and satellite R&D on the way.


>1. The capital costs are higher, you have to expend tons of energy to put it into orbit

Putting 1kW of solar on land: ~$2K. Putting it into orbit on Starship (current ground-based heavy solar panels, 40kg for 4m² producing 1kW, in space): anywhere between $400 and $4K. Add to that that costs on Earth will only grow, while costs in space will fall.

Ultimately, Starship's costs will come down to the bare cost of fuel + oxidizer, about 20kg per 1kg to LEO, i.e. less than $10/kg, if they manage streamlined operations and high reuse. Yet even at $100/kg, it is still better in space than on the ground.
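To make the comparison concrete, here is the parent's 1kW panel (40kg) priced at a few assumed launch costs, against the ~$2K it costs installed on land; the $1,500/kg figure matches the current price quoted in a reply below:

    # Launch cost of a 40 kg, 1 kW ground-grade panel at various $/kg to LEO.
    PANEL_MASS_KG = 40
    GROUND_COST = 2_000          # assumed installed cost of 1 kW on land

    for price_per_kg in (10, 100, 1_500):   # optimistic Starship, plausible Starship, today
        launch_cost = PANEL_MASS_KG * price_per_kg
        print(f"${price_per_kg}/kg -> ${launch_cost:,} to orbit vs ${GROUND_COST:,} on land")

That prints $400, $4,000 and $60,000 respectively, which is where the ranges in this subthread come from.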

And for the cooling that people complain about so much without running the numbers: https://news.ycombinator.com/item?id=46878961

>2. The maintenance costs are higher because the lifetime of satellites is pretty low

It will live out the 3-5 years of the GPU lifecycle.


Current cost to LEO is $1500 per kg

That would make your solar panel (40kg) around $60K to put into space.

Even being generous and assuming you could get it to $100 per kg that's still $4000

There's a lot of land in the middle of nowhere that is going to be cheaper than sending shit to space.


>That would make your solar panel (40kg) around $60K to put into space.

with the GPU costing the same, it would only double the capex.

>Even being generous and assuming you could get it to $100 per kg that's still $4000

Noise compared to the main cost: the GPUs.

>There's a lot of land in the middle of nowhere that is going to be cheaper than sending shit to space.

The cheapness of the location of your major investment (the GPUs) may well turn out to be secondary to other considerations: stable availability of power and cooling capacity, jurisdiction, etc.


> with the GPU costing the same, it would only double the capex.

Yes, only doubling the capex. With the benefits of, hmm, no maintenance access and awful networking?


Any idea what the estimated cost of a Google TPU is? It may not make sense at Nvidia retail prices, but perhaps at Google's cost price.

I can only speculate out of thin air: the B200 and the Ryzen 9950X are made on the same process and have an 11x difference in die size. 11 Ryzens would cost $6K, and with 200GB of RAM, $8K. Googling suggests the B200's cost of production is $6,400. That matches the numbers from the Ryzen-based estimate above (the Ryzen numbers are retail, yet it has higher yield, so it balances out). So I'd guess that, given Google's scale, a TPU similar to the B200 should be $6K-$10K.

1kW of solar panels is €150 retail right now. You are probably at €80 or less if you buy a few MW.

(I'm ignoring installation costs etc. because actually creating the satellites is ignored here, too)


Installation of large solar plants is largely automated already.

> will come down to the bare cost of fuel + oxidizer

And maintenance and replacing parts and managing flights and ... You're trying to yadda-yadda so much opex here!


It is SpaceX/Elon who bet billions on that yadda-yadda, not me. I wrote "if" for $10/kg. I'm sure, though, that they could easily get under $100/kg, which is $15M per flight. And even at $100/kg, datacenters in space still make sense, as comparable to ground-based ones and as providing demand for the huge Starship launch capacity.

A datacenter costs ~$1000/ft^2. How much equipment per square foot is there? say 100kg (1 ton per rack plus hallway). Which is $1000 to put into orbit on Starship at $100/kg. At sub-$50/kg, you can put into orbit all the equipment plus solar panels and it would still be cheaper than on the ground.


It looks like you’re comparing the cost of installing solar panels on the ground with the cost of just transporting them to orbit. You can’t just toss raw solar panels out of a cargo bay.

>You can’t just toss raw solar panels out of a cargo bay.

That is exactly what you do - just like with Starlink - toss out the panels with attached GPUs, laser transmitter and small ion drive.


100 x 100 is 10,000.

> it is SpaceX/Elon

The known scammer guy? Like these ideas wouldn't pass the questions at the end of a primary school presentation.


> putting 1KW of solar on land - $2K, putting it into orbit on Starship (current ground-based heavy solar panels, 40kg for 4m2 of 1KW in space) - anywhere between $400 and $4K.

What Starship? The fantasy rocket Musk has been promising for 10 years, or the real one that has thus far delivered only one banana's worth of payload into orbit?


It is obviously predicated on Starship; these discussions make no sense otherwise.

> or the real one that has thus far delivered only one banana worth of payload into orbit?

Once it starts delivering real payloads, the time for discussion will be over; it will be time to rush to book your payload slot.


You are presented with a factual, verifiable statement that Starship has been promised for years and that all that's been delivered is something capable of sending a banana to LEO. Wayyyy overdue, too.

You meet this with "well, once it works, it'll be amazing and you'll be queuing up"? How very very musky!

What a cult.


The bean counters at Nvidia recently upped the expected lifecycle from 5 years to 6. On paper, you are now expected to get 6 years out of a GPU for datacenter use, not 3-5.

To add: a space solar cell will weigh only 4-12kg, as the protection requirements are different.

source?

> The maintenance costs are higher because the lifetime of satellites is pretty low

Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean...


> Presumably they're planning on doing in-orbit propellant transfer to reboost the satellites so that they don't have to let their GPUs crash into the ocean

Hell, you're going to lose some fraction of chips to entropy every year. What if you could process those into reaction mass?


I believe that a modern GPU would burn out immediately. Chips for space use ancient process nodes with chunky components so that they are more resilient to radiation. Deploying a 3nm process into space seems unlikely to work unless you surround it with a foot of lead.

Or cooling water/oil?

This brings a whole new dimension to that joke about how our software used to leak memory, then file descriptors, then EC2 instances, and soon we'll be leaking entire data centers. So essentially you're saying: let's convert this into a feature.

Another significant factor is that radiation makes things worse.

Ionizing radiation disrupts the crystalline structure of the semiconductor and makes performance worse over time.

High-energy protons randomly flip bits and can cause latch-up, single-event gate rupture, immediate hardware destruction, etc.


If anything, considering this + limited satellite lifetime, it almost looks like a ploy to deal with the current issue of warehouses full of GPUs and the questions about overbuild with just the currently actively installed GPUs (which is a fraction of the total that Nvidia has promised to deliver within a year or two).

Just shoot it into space where it's all inaccessible and will burn out within 5 years, forcing a continuous replacement scheme and steady contracts with Nvidia and the like to deliver the next generation at the exact same scale, forever


And just like that, you've added another never-done-before, and definitely-not-at-scale, problem to the mix.

These are all things which add weight, complexity and cost.

Propellant transfer to an orbital Starship hasn't even been done yet, and that's completely vital to its intended missions.


Or maybe they want to just use them hard and deorbit them after three years?

"Planning" is a strong word..

> Everything about operating a datacenter in space is more difficult than it is to operate one on earth

Minus one big one: permitting. Every datacentre I know going up right now is spending 90% of their bullshit budget on battling state and local governments.


But since building a datacenter almost anywhere on the planet is more convenient than outer space, surely you can find some suitable location/government. Or put it on a boat, which is still 100 times more sensible than outer space.

> since building a datacenter almost anywhere on the planet is more convenient than outer space, surely you can find some suitable location/government

More convenient. But I'm balancing the cost equation. There are regimes where this balances. I don't think we're there yet. But it's irrational to reject it completely.

> Or put it on a boat, which is still 100 times more sensible than outer space

More corrosion. And still, interconnects.


> More corrosion

Surely, given Starlink's 5-ish-year deorbit plan, you could design a platform to hold up for that long... And instead of burning the whole thing up, you could just refurbish it when you swap out the actual rack contents, considering that those probably have an even shorter edge lifespan.


Starlinks are built to safely burn up on re-entry. A big reusable platform would have to work quite differently so that it never uncontrollably re-enters, or it might kill someone with high-velocity debris on impact.

This adds weight and complexity and likely also forces a much higher orbit.


Hopefully a sea platform does not end up flying into space all on its own, only to crash and burn back down.

Maybe the AI workloads running on it achieve escape velocity? ;)


I can’t wait for all the heavy metals that are put into GPUs and other electronics showering down on us constantly. Wonder why the billionaires have their bunkers.

Yeah, "burn up safely on reentry".

100 years later: "why does everything taste like cadmium?"


If you think there is no paperwork necessary for launching satellites, you are very, very wrong.

> If you think there is no papework necessary for launching satellites, you are very very wrong

I would be. And granted, I know a lot more about launching satellites than building anything. But it would take me longer to get a satellite in the air than the weeks it will take me to fix a broken shelf in my kitchen. And hyperscalers are connecting in months, not weeks.


I swear that fella is like the Elon Musk of HN: when he talks about subjects outside his domain, he gets caught out.

> when he talks about subject outside of his domain

Hate to burst your bubble. But I have a background in aerospace engineering. I’ve financed stuff in this field, from launch vehicles to satellites. And I own stakes in a decent chunk of the plays in this field. Both for and against this hypothesis.

So yeah, I'll hold my ground on having a reasonable basis for being sceptical of blanket dismissals of this idea, as much as I dismiss certainty in its success.

There are a lot of cheap shots around AI and aerospace. Some are coming from Musk. A lot are coming from one-liner pros. HN is pretty good at filtering those to get the good stuff, which is anyone doing real math.


That actually confirms what the other commenter said.

Your assertion was "Every datacentre I know going up right now is spending 90% of their bullshit budget on battling state and local governments" and you haven't demonstrated any expertise in building data centers.

You've given a very extraordinary claim about DC costs, with no evidence presented, nor expertise cited to sway our priors.


That sounds very over-compensating. Musk-type behaviour.

It's also infinitely easier to get 24/7 unadulterated sunlight for your solar panels.

Not 24/7 in low Earth orbit, but perhaps at an Earth-Moon or Earth-Sun L4/L5 Lagrange point. Though with higher latency to Earth.

So what? Why is it important to have 24/7 solar, which you cannot have on the ground? At ground level you have fossil fuels.

I wonder if you were thinking about muh emissions for a chemical-rocket-launched piece of machinery containing many toxic metals, to be burnt up in the air in 3-5 years... It doesn't sound more environmentally friendly.


I mean, you don't have zoning in space, but you have things like international agreements to avoid, you know, catastrophic human development situations like Kessler syndrome.

All satellites launched into orbit these days are required to have de-orbiting capabilities to "clean up" after EOL.

I dunno, two years ago I would have said municipal zoning probably ain't as hard to ignore as international treaties, but who the hell knows these days.


> you have things like international agreements to avoid, you know, catastrophic human development

Yes. These are permitted in weeks for small groups, days for large ones. (In America.)

Permitting is a legitimate variable that weighs in favor of in-space data centers.


> is spending 90% of their bullshit budget on battling state and local governments

Source? I can't immediately find anything like that.


Parent just means "a lot" and is using 90% to convey their opinion. The actual numbers are closer to 0.083%[1][2][3][4] and parent thinks they should be 0.01-0.1% of the total build cost.

1. Assuming 500,000 USD in permitting costs. See 2.

2. Permits and approvals: Building permits, environmental assessments, and utility connection fees add extra expenses. In some jurisdictions, the approval process alone costs hundreds of thousands of dollars. https://www.truelook.com/blog/data-center-construction-costs

3. Assuming a 60MW facility at $10M/MW. See 4.

4. As a general rule, it costs between $600 to $1,100 per gross square foot or $7 million to $12 million per megawatt of commissioned IT load to build a data center. Therefore, if a 700,000-square foot, 60-megawatt data center were to be built in Northern Virginia, the world’s largest data center market, it would cost between $420 million and $770 million to construct the facility, including its powered shell and equipping the building with the appropriate electrical systems and HVAC components. https://dgtlinfra.com/how-much-does-it-cost-to-build-a-data-...
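For anyone checking: the 0.083% falls straight out of the footnotes' assumptions (~$500K of permitting against a 60MW build at ~$10M/MW):

    # Permitting as a share of build cost, per footnotes [1]-[4] above.
    PERMIT_COST = 500_000
    BUILD_COST = 60 * 10_000_000     # 60 MW at $10M/MW

    print(f"Permitting share: {PERMIT_COST / BUILD_COST:.3%}")   # ~0.083%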


Yeah, I was trying to be nicer than "you're making it up" just in case someone has the actual numbers.

He said bullshit budget, not budget. He's thinking about opportunity and attention costs, not saying that permits literally have a higher price tag than GPUs.

> Source? I can't immediately find anything like that

I’ve financed two data centers. Most of my time was spent over permitting. If I tracked it minute by minute, it may be 70 to 95%. But broadly speaking, if I had to be told about it before it was solved, it was (a) a real nuisance and (b) not technical.


Unless you're the single largest cost, your personal time says nothing about actual DC costs, does it?

Just admit it was hyperbole.


That may have been the case before, but it is not anymore. I live in Northern VA, the data center capital, and it is easier to build one permit-wise than a tree house. Also see the provisions in the OBBB.

What counts towards a bullshit budget? Permitting is a drop in the bucket compared to construction costs.

This is a huge one. What Musk is looking for is freedom from land acquisition. Everything else is an engineering and physics problem that he will somehow solve. The land acquisition problem is out of his hands and he doesn't want to deal with politicians. He learned from building out the Memphis DC.

He "learned" by illegally poisoning black people

> an engineering and physics problem that he will somehow solve

no he won't


What? This is Hacker News, man. Talk substance, not some rage-baiting nonsense.


Thank you. This is really nasty. Boxtown residents should sue xAI and take them to court.

So freedom from law and regulation?

Well let's face it. Not all law and regulation is created equal. Look at Europe.

Where a random malicious president can't just hijack the government and giga-companies can't trivially lobby lawmakers for profits at the expense of citizens?

So why does he not build here in Europe, then? Getting a permit for building a data center in Sweden is just normal industrial zoning that anyone can get for cheap; there is plenty of it. The only challenge is getting enough electricity.

I meant that Europe is an example of how not to do regulation. The problem you just mentioned: if you get land easily, electricity won't be available, and vice versa.

Why is it an example? Can you cite any case where "regulation" trampled the construction of a properly designed datacenter?

Or did you mean "those poor billionaires can't do as they please with the common resources of us all, and without any accountability"?

As a quick anecdote, there is a DC under construction in Portugal with a projected capacity of 1.2GW, powered by renewables.


> Not all law and regulation is created equal. Look at Europe.

You're spot on, but you are not saying what you think you're saying.


Maybe, but I'm skeptical, because current DCs are not designed to minimize footprint. Has anyone even built a two-story DC? Obviously cooling is always an issue, but not, directly, land.

Now that I think of it, a big hydro dam would be perfect: power and cooling in one place.


> Has anyone even built a two-story DC?

Downtown Los Angeles: the One Wilshire building, which is the world's most connected building. There are over twenty floors of data centers. I used Corporate Colo, which was a block or two away. That building had at least 10 floors of data centers.


I think Downtown Seattle has a bunch too (including near Amazon campus). I just looked up one random one and they have about half the total reported building square footage of a 10-story building used for a datacenter: https://www.datacenters.com/equinix-se3-seattle

> Has anyone even built a two-story DC?

Every DC I've been in (probably around 20 in total) has been multi-storey.


Skepticism is valid. The environmentalists came after dams too.

Amazon's new campus in Indiana is expected to use 2.2GW when complete. 50MW is nothing, and that's ignoring the fact that most of that power wouldn't actually be used for compute.

Starlink provides a service that couldn't exist without the satellite infrastructure.

Datacenters already exist. Putting datacenters in space does not offer any new capabilities.


This is the main point, I think. I am very much convinced that SpaceX is capable of putting a datacenter into space. I am not convinced they can do it cheaper than building a datacenter on earth.

> Isn't 50MW already by itself equivalent to the energy consumption of a typical hyperscaler cloud?

xAI's first data center buildout was in the 300MW range and their second is in the gigawatt range. There are planned buildouts from other companies even bigger than that.

So data center buildouts in the AI era need 1-2 orders of magnitude more power and cooling than your 50MW estimate.

Even a single NVL72 rack, just one rack, needs 120kW.


I ran the math the last time this topic came up.

The short answer is that ~100m² of steel plate at 1400°C (just below its melting point) will shed ~50MW of power as blackbody radiation.

https://news.ycombinator.com/item?id=46087616#46093316
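For reference, the Stefan-Boltzmann arithmetic behind that claim, assuming an idealized black body radiating from one face (emissivity 1, no incoming solar or Earth heating):

    # Blackbody radiation from ~100 m^2 of plate at 1400 C.
    SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
    AREA_M2 = 100.0
    TEMP_K = 1400 + 273.15

    power_mw = SIGMA * AREA_M2 * TEMP_K**4 / 1e6
    print(f"Radiated power: {power_mw:.0f} MW")   # ~44 MW, roughly the 50 MW claimed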


Which GPU runs at 1400C?

Because 10K satellites have a FAR greater combined surface area than a single space-borne DC would. Stefan-Boltzmann law: the ability to radiate heat increases with the 4th power of surface area.

It's linear in surface area, but goes as the 4th power of temperature.

Are Starlink satellites in sun-synchronous orbits? Doesn't constant solar heating change the energy balance quite a bit?

50MW is on the small side for an AI cluster - probably fewer than 50k GPUs.

If the current satellite model dissipates 5kW, you can't just add a GPU (+1kW). Maybe removing most of the downlink stuff lets you put in 2 GPUs? So if you had 10k of these, you'd have a pretty high-latency cluster of 20k GPUs.

I'm not saying I'd turn down free access to it, but it's also very cracked. You know, sort of Howard Hughes-y.


High latency to earth but low latency (potentially) to other satellites.

50MW might be one aisle of a really dense DC. A single rack might draw 120kW.

5kW means you can't even handle a single one of these[0], compared to a handful per rack on an earthbound data centre.

0. https://www.arccompute.io/solutions/hardware/gpu-servers/sup...


Starlink satellites also radiate a non-trivial amount of the energy they consume from their phased arrays.

> A Starlink satellite uses about 5K Watts of solar power. It needs to dissipate around that amount (+ the sun power on it) just to operate.

This isn't quite true. It's very possible that the majority of that power is going into the antennas/lasers, which technically means the energy is being dissipated, but it never became heat in the first place. Also, 5kW of solar power likely only means ~3kW of actual electrical consumption (you will over-provision a bit, both for when you're behind the Earth and just for safety margin).


> Why is starlink possible and other computations are not?

Aside from the point others have made that 50 MW is small in the context of hyperscalers, if you want to do things like SOTA LLM training, you can't feasibly do it with large numbers of small devices.

Density is key because of latency - you need the nodes to be in close physical proximity to communicate with each other at very high speeds.

For training an LLM, you're ideally going to want individual satellites with power delivery on the order of at least about 20 MW, and that's just for training previous-generation SOTA models. That's nearly 5,000 times more power than a single current Starlink satellite, and nearly 300 times that of the ISS.

You'd need radiator areas in the range of tens of thousands of square meters to handle that. Is it theoretically technically possible? Sure. But it's a long-term project, the kind of thing that Musk will say takes "5 years" that will actually take many decades. And making it economically viable is another story - the OP article points out other issues with that, such as handling hardware upgrades. Starlink's current model relies on many cheap satellites - the equation changes when each one is going to be very, very expensive, large, and difficult to deploy.
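As a rough sense of scale for that radiator claim, here is a Stefan-Boltzmann estimate for rejecting 20 MW at electronics-friendly radiator temperatures; the temperature, emissivity and two-sided-panel figures are assumptions, and solar/Earth infrared loading is ignored:

    # Radiator area needed to reject 20 MW at a given panel temperature.
    SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)
    POWER_W = 20e6          # heat to reject
    EMISSIVITY = 0.9        # assumed coating emissivity
    SIDES = 2               # flat panel radiating from both faces

    for temp_k in (300, 350):
        flux = SIDES * EMISSIVITY * SIGMA * temp_k**4      # W per m^2 of panel
        print(f"{temp_k} K -> {POWER_W / flux:,.0f} m^2")  # roughly 13,000-24,000 m^2

which lines up with the "tens of thousands of square meters" figure above.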


> 10th (or worse) best AI company

You might only care about coding models, but text is dominating the market share right now and Grok is the #2 model for that in arena rankings.


Grok is losing pretty spectacularly on the user / subscriber side of things.

They have no path to paying for their existence unless they drastically increase usage. There aren't going to be very many big winners in this segment and xAI's expenses are really really big.


I really wonder what will happen when the AI companies can no longer set fire to piles of investor money, and have to transition to profitability or at least revenue neutrality - as that would entail dramatically increasing prices.

Is the plan to have everyone so hopelessly dependent on their product that they grit their teeth and keep on paying?


The answer to this is very very simple.

Think about the stock return over a period: it's composed of capital gains and dividends.

Now what happens when capital gains disappear and perhaps turn into capital losses? Dividends have to go higher.

What does this mean? Less retained earnings / cashflows that can be re-invested.

Apple is the only one that will come out of this OK. The others will be destroyed, for if they don't return cash, the cash balance will be discounted, leading to a further reduction in the value of equity. The same thing happened to Zuckerberg and Meta with the Metaverse fiasco.

Firms in the private sphere will go bust or be acquired.


> Now what happens capital gains disappears and perhaps turns into capital losses? Dividends have to go higher

This is not how corporate finance works. Capital gains and losses apply to assets. And only the most disciplined companies boost dividends in the face of decline—most double down and try to spend their way back to greatness.


It'll be a combination of advertising and subscription fees, and there will only be a few big winners.

Gemini is practically guaranteed. With the ad model already primed, their financial resources, their traffic to endlessly promote Gemini (ala Chrome), their R&D capabilities around AI, their own chips, crazy access to training data, and so on - they'd have to pull the ultimate goof to mess up here.

Microsoft is toast, short of a miracle. I'd bet against Office and Windows here. As Office goes down, it's going to take Windows down with it. The great Office moat is about to end. The company struggles, the stock struggles, Azure gets spun off (unlock value, institutional pressure), Office + Windows get spun off, and the company splits into pieces. The LLMs are an inflection point for Office, and Microsoft is super at risk, backwards regarding AI, and slow. The OpenAI pursuit, as it was done, was a gigantic mistake for Microsoft, one of the dumbest strategies in the history of tech; it left them with their pants down. Altman may have killed a king by getting him to be complacent.

Grok is very unlikely to make it (as is). The merger with SpaceX guarantees its death as a competitor to GPT/Gemini/Claude, it's over. Maybe they'll turn Grok into something useful to SpaceX. More likely they'll slip behind and it'll die rapidly like Llama. The merger is because they see the writing on the wall, this is a bailout to the investors (not named Elon) of xAI, as the forced Twitter rollup was a bailout for the investors of Twitter.

Claude is in a weird spot. What they have is not worth $300-$500 billion. Can they figure out how to build a lot more value out of what they have today (and get their finances sustainable), before the clock runs out? Or do they get purchased by Meta, Microsoft, etc.

OpenAI has to rapidly roll out the advertising model and get the burn rate down to meaningless levels, so they're no longer dependent on capital markets for financing (that party is going to end suddenly).

Meta is permanently on the outside looking in. They will never field an in-house competitor to GPT or Gemini that can persistently keep up. Meta doesn't know what it is or why it should be trying to compete with GPT/Gemini/Claude. Their failure (at this) is already guaranteed. They should just acquire GPT 4o and let their aging userbase on FB endlessly talk itself into the grave for the next 30 years while clicking ads.

If Amazon knew what they were doing (they don't right now), they would: immediately split retail + ads and AWS. The ad business ensures that the retail business will continue to thrive and would be highly lucrative. Then have AWS purchase Anthropic when valuations drop, bolt it on to AWS everything. Far less of an anti-trust issue than if what is presently known as Amazon attempted it here and now. Anthropic needs to build a lot on to itself to sustain itself and justify its valuation, AWS already has the answer to that.

If valuations plunge, and OpenAI is not yet sustainable, Microsoft should split itself into pieces and have the Windows-Office division purchase OpenAI as their AI option. It'd be their only path to avoiding anti-trust blocking that acquisition. As is Microsoft would not be allowed to buy OpenAI. Alternatively Microsoft can take a shot at acquiring Anthropic at some point - this seems likely given the internal usage going on at Redmond, the primary question is anti-trust (but in this case, Anthropic is viewed as the #3, so Microsoft would argue it bolsters competition with GPT & Gemini).


"Gemini is practically guaranteed. With the ad model already primed, their financial resources, their traffic to endlessly promote Gemini (ala Chrome), their R&D capabilities around AI, their own chips, crazy access to training data, and so on - they'd have to pull the ultimate goof to mess up here"

I'm not convinced of this, TBH, in the long run. Google is seemingly a pure-play technology firm that has to make products for the sake of it, else the technology is not accessible/usable. Does that mean they are at their core a product firm? Nah. That's always been Apple's core thing, alongside superior marketing.

One only has to compare Google's marketing of the Pixel phone to Apple's; it does not come close. Nobody connects with Google's ads the way they do with Apple's. Google has a mountain to climb and has to compensate the user tremendously for switching.

Apple will watch the developments keenly and figure out where they can take advantage of the investments others have made. Hence the partnerships et al with Google.


Merging with SpaceX means they don't have to pay for their existence. Anyway they're probably positioned better than any other AI player except maybe Gemini.

I don’t follow why merging with SpaceX means they don’t have to pay for their existence. Someone does. Presumably now that is SpaceX. What is SpaceX’s revenue?

Maybe the idea is that SpaceX has access to effectively unlimited money through the US Government, either via ongoing lucrative contracts, or likely bailouts if needed. The US Govt wouldn't bail out xAI but they would bail out SpaceX if they are in financial trouble.

Plus government backstop. The federal government (especially the current one) is not going to let SpaceX fail.

Maybe not, but they might force it to sell at fire sale prices to another aerospace company that doesn't have the baggage.

xAI includes Twitter? I thought Twitter was just X?

xAI acquired Twitter in 2025 as part of Musk's financial shell game (probably the same game he is playing with SpaceX/xAI now).

I found mistakes in the spreadsheet backing up 2 published articles (corporate governance). The (tenured Ivy) professor responded by paying me (after I’d graduated) to write a comprehensive working paper that relied on a fixed spreadsheet and rebutted the articles.

Integrity is hard, but reputations are lifelong.


Whether or not this is AI slop, it places a huge burden on everyone here to read it, which we will not. Please be thoughtful, and just stop.


This wasn't AI, sorry about that.


AI bot-like behaviour.


Generally you pack it into a crate and fly it. You may well need a Carnet and/or other paperwork, and it only makes sense for longer trips.

Consider buying a bike in Europe instead - generally cheaper and you can get something more appropriate to the terrain.


Talk to a VC as well. Or more than one, and ideally ones that are in the medical device space and have done similar deals with others from your university. If they are interested then they can apply pressure on the university too, to get to a solution that works for everyone.


2 is not optional, as the pop-up has already occurred and the interruption is done, and 4-6 are neither obvious nor easy for almost everyone.

I recommend people always respond with the lowest possible score (1, not 0) when presented with popups like this.


The New Zealand Active Investor Plus residency program requires NZ$5m, which is under US$3m, but that would take everything. There is another program mooted where you buy a business for less than that.


NZ’s Active Investor Plus program is more like EB-5 than this. AIP requires that migrants invest their funds, not donate them. The Growth category requires fewer residency days and a NZ$5m (~US$3m) investment in “growth” companies or funds, including VC funds and companies that VC funds invest with. The Balanced category requires double the investment and has a wider range of asset classes, but also a longer duration and higher number of days of residency required.


Fun but scammy.

If you copy the generated URL and put it into the entry field (and repeat), then you end up at a bitcoin site. As BubbleRings has pointed out, that has malware.


> If you copy the generated url and put it into the entry field (and repeat) then you end up at a bitcoin site.

Uh, what? I just tried it a few times, and it seems to just follow the redirect each time, always ending up back at the original target URL I entered. How many times did you have to "repeat" to make that happen?

> As Bubblerings has pointed out that has malware.

No, that's not what BubbleRings said. BubbleRings said one site on VirusTotal reported it was malware. That sounds like a false positive because the URL is fishy, which is the entire point of the joke here.

