I really don't get this. Most of these low-cost providers actually re-rent GPUs from Azure, AWS, or GCP (*), yet they offer much better on-demand pricing (as low as $3.8/hr for an H100 SXM and $2.5/hr for PCIe).
And you can get even better on-demand (not to mention reserved) pricing from the big clouds if you're a decent startup with connections.
If one of these clouds offered fair pricing to SMBs, it could be a great bottom-up growth strategy.
(*) Not LambdaLabs afaik, but they rarely have on-demand capacity anyway, and you can only get a reasonable price with a 3-year reservation (which, surprise surprise, costs more than the hardware itself).
RunPod is one of the providers with some of the best on-demand pricing without contracts (there are places where you can get an H100 PCIe for less, but they typically don't have any capacity), and it doesn't own hardware (or didn't as of a few months ago).
The middlemen you're talking about do two things: buy lots of reserved compute on the hyperscalers, and then pit the hyperscalers against each other to get better pricing.
If you're reserving thousands of GPUs from the same hyperscaler, even if they're the only cloud you run on, you're not paying the price shown in the calculator. If you have other suppliers, you'll get an even better deal. Then you resell that reserved compute as on-demand compute, somewhere between your costs and what your customers would pay a hyperscaler directly.
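The arbitrage described above is easy to sketch with back-of-the-envelope numbers. Every rate and the utilization figure below are my own illustrative assumptions, not figures from this thread:

```python
# Sketch of the reserved-to-on-demand resale model, with made-up numbers.
HOURS_PER_MONTH = 730

bulk_reserved_rate = 2.0   # $/GPU-hr, assumed negotiated rate for thousands of reserved GPUs
resale_on_demand = 3.8     # $/GPU-hr, assumed middleman's on-demand price
utilization = 0.70         # assumed fraction of reserved hours actually resold

# The middleman pays for every reserved hour, but only earns on utilized hours.
cost_per_gpu_month = bulk_reserved_rate * HOURS_PER_MONTH
revenue_per_gpu_month = resale_on_demand * HOURS_PER_MONTH * utilization
margin = revenue_per_gpu_month - cost_per_gpu_month

print(f"cost:    ${cost_per_gpu_month:,.0f}/GPU-month")
print(f"revenue: ${revenue_per_gpu_month:,.0f}/GPU-month")
print(f"margin:  ${margin:,.0f}/GPU-month")

# Break-even utilization: fraction of reserved hours that must be resold
# to cover the reservation cost.
break_even = bulk_reserved_rate / resale_on_demand
print(f"break-even utilization: {break_even:.0%}")
```

The point is that as long as utilization stays above the break-even ratio (reserved rate divided by resale rate), the middleman profits even while undercutting the hyperscaler's list on-demand price.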