I went to install "moltbot" yesterday, and the binary was still "clawdbot" after installation. Wonder if they'll use Moltbot to manage the rename to OpenClaw.
A lot of this felt very familiar. Having multiple plans does seem like a good way to hedge against the unknown, but I can also see that you'd end up with the "secret 5th" plan when all of those unknowns eventually stack up.
Planning is inaccurate, frustrating, and sadly necessary.
I'd see this as coming down to incentives. If you can scrape naively and it's cheap, what's the benefit to you in doing something more efficient for a git forge? How many other edge cases are there where you could potentially save a little compute/bandwidth, but would need to implement a whole other set of logic?
Unfortunately, this kind of scraping seems to inconvenience the host way more than the scraper.
Another tangent: there probably are better-behaved scrapers; we just don't notice them as much.
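To put that "whole other set of logic" in concrete terms, here's a rough Python sketch of the kind of branch a polite scraper would need just for git forges: detect the host, do one shallow clone, and read files locally instead of hammering the web UI. The names (FORGE_HOSTS, shallow_clone, fetch) are all made up for illustration.

    import subprocess
    import tempfile
    from urllib.parse import urlparse

    # Assumed list of hosts where a clone is cheaper for everyone than page crawling.
    FORGE_HOSTS = {"github.com", "gitlab.com", "codeberg.org"}

    def shallow_clone(repo_url: str) -> str:
        """Grab only the latest snapshot: one request stream instead of thousands of page hits."""
        dest = tempfile.mkdtemp(prefix="scrape-")
        subprocess.run(["git", "clone", "--depth", "1", repo_url, dest], check=True)
        return dest

    def fetch(url: str) -> str:
        host = urlparse(url).hostname or ""
        if host in FORGE_HOSTS:
            # The extra logic: special-case the forge and clone once,
            # rather than fetching every rendered blob/commit page.
            return shallow_clone(url)
        # Naive fallback: crawl page by page (cheap for the scraper, costly for the host).
        raise NotImplementedError("generic page-by-page crawl goes here")

The point being: that branch is maybe twenty lines, but it's twenty lines per edge case, and the savings accrue to the host, not to you.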
Given they've been essentially subsidizing self-hosted orgs for a while, I'm kinda surprised they didn't do this before now. Probably wanted to lead with the price cut for everyone else.
It'll be interesting to see how this affects third-party companies providing GitHub runners.