A "petty battle against CloudFlare DNS users"? No. They require a standardized DNS extension in order for their services to operate properly, and CloudFlare is waging a petty battle against the standard.
An "extension" is by definition something they do not require, since otherwise DNS clients written before that extension would not be able to interoperate with them. That's what makes it an "extension" rather than an unconscionable violation of backwards compatibility.
And that particular extension exists solely as a means for DNS proxies to violate the privacy of their users by leaking client identity data to upstream DNS servers. There are several reasons why Cloudflare is evil and needs to die (especially reCAPTCHA and associated attacks on Tor), but archive.is is firmly in the wrong on this particular point.
For every application of FSD that applies to a lending library, it seems like there are 99 or more that apply to everyday lending, modification, repair, or resale concerns. Even if libraries didn't exist, I think we'd have something like first sale doctrine.
IMO this is a symptom of a larger problem with the entire web 'ecosystem' (HTML, CSS, JS primarily). It all tries (or tried, anyway) to be simple and easy and forgiving - which means that simple things are easy, and moderately complex things are insanely difficult.
I honestly think the only real solution is re-doing all of it 'from scratch', to make everything (a) stricter and (b) more normal/consistent.
(I think the extant transpilers/preprocessors tend to try to be too 'thin' of a wrapper around CSS/JS: they do give some nice quality of life features but still don't really clean up the mess that is JS and the DOM.)
Actually, it is (mostly) as easy as it can be if you think about how the web lets one run third-party code safely, not only sandboxed in the host environment but also from different domains. I've seen this space evolve since the days of DHTML, and the best way to create dynamic content is with document.createElement. Actually I use the crelt[1] util, and the code is shorter and faster than any React/Vue virtual-DOM templating system. It's like finally riding a bicycle without training wheels after 15 years of practice.
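For flavor, here is a minimal sketch of that style: a simplified stand-in for crelt (the helper name `el` is made up here; the real crelt has a similar call shape but handles more edge cases):

```javascript
// Simplified crelt-style helper (hypothetical name `el`): build DOM
// nodes directly with document.createElement, no virtual DOM.
function el(tag, attrs = {}, ...children) {
  const node = document.createElement(tag);
  for (const [key, value] of Object.entries(attrs)) {
    node.setAttribute(key, value);
  }
  // `append` accepts both strings and DOM nodes
  node.append(...children);
  return node;
}

// Example: el("ul", { class: "menu" }, el("li", {}, "Home"))
// builds <ul class="menu"><li>Home</li></ul> directly.
```

The nested-call shape reads much like a template, but each call returns a real DOM node immediately.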
You can always take a multi-tenant system and convert it into a single-tenant system a lot more easily. First and foremost, you can simply run the full multi-tenant system with only a single tenant, which if nothing else enables progressive development (you can slowly remove those now-unnecessary WHERE clauses, etc).
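As a sketch of that progression (hypothetical schema, with JavaScript standing in for the data layer): the tenant filter does nothing harmful when only one tenant exists, so it can stay in place until you get around to removing it.

```javascript
// Hypothetical multi-tenant data access: every read is scoped by tenant.
function listProjects(db, tenantId) {
  return db.projects.filter((p) => p.tenantId === tenantId);
}

// Multi-tenant deployment: the filter does real work.
const shared = {
  projects: [
    { tenantId: 1, name: "alpha" },
    { tenantId: 2, name: "beta" },
  ],
};

// Single-tenant deployment: same code, but the database holds only one
// tenant, so the WHERE-clause equivalent is a no-op you can delete later.
const dedicated = {
  projects: [{ tenantId: 1, name: "alpha" }],
};
```

That is what makes the conversion progressive: nothing breaks on day one, and each now-redundant filter can be removed at leisure.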
See my sibling comment with the Hubspot example. Even though the system might work internally, other things will break if you start having duplicate account IDs because other systems don't think of the account ID as a cluster-local identifier, but as a global one.
Just thinking through this, but if it's an entirely separate environment, just host it on a separate subdomain and the account id becomes irrelevant. If you have a functioning staging environment, you already have the ability to run an isolated environment without issue, this is just a different application of the same idea.
You can probably run the environment itself, but other systems (release management, monitoring, billing, etc) probably rely on the account_id being unique.
I think you are talking about problems going from multiple single-tenant systems to a single multi-tenant system. Your parent is talking about the opposite.
In the simple, YAGNI implementation of this, when you create a new HubSpot account, most likely that will insert a new row into the accounts table, and the auto-generated ID of that row will be your account ID. Therefore you need uniqueness to be enforced at that level.
If you want to start running a separate copy of the system, you need to refactor the system to move that sequence out of the database so that two different customers running on different clusters don't end up with the same account ID. This is just an example, but there are many problems like this that are caused by the assumption that the production system is a single unique system.
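One common fix (an assumed scheme for illustration, not anything HubSpot has described) is to fold a cluster identifier into the ID itself, so per-cluster sequences can never collide:

```javascript
// Sketch: compose a globally unique account ID from a cluster number
// and a cluster-local sequence, instead of a single DB auto-increment.
// The bit widths here are arbitrary choices for illustration.
function makeAccountId(clusterId, localSeq) {
  return (BigInt(clusterId) << 48n) | BigInt(localSeq);
}

function clusterOf(accountId) {
  return Number(accountId >> 48n); // recover which cluster minted it
}
```

The same idea underlies Snowflake-style ID schemes; the point is that uniqueness stops depending on there being exactly one database in the world.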
Everything has a solution, but want to bet that at least 20 different internal systems at HubSpot assume that the account ID in that URL is globally unique?
In my experience by the time you reach this point you have a lot of operational complexity because you and your team are used to your production cluster being a single behemoth, so chances are it's not easy to stand up a new one or the overhead for doing so is massive (i.e. your production system grew very complex because there is rarely if ever a need to stand up a new one).
Additionally, a multi-tenant behemoth might be full of assumptions that it's the only system in town, therefore making it hard to run a separate instance (i.e. uniqueness constraints on names, IDs, etc).
One of the issues I see in one of my projects is high interactivity between accounts. E.g. if account 1 'sends' something to account 2, both of the shared/separate db instances need to be up, or there'll need to be some kind of queueing mechanism.
That's hard enough on its own, and then add to it that most clients want to BYOK (bring their own keys) to those instances.
High interactivity between accounts is a good reason to not adopt the proposed multi-single-tenant architecture. The scenarios discussed are B2B enterprisey apps in which the environments are essentially independent from each other.
Open ports which are accessible to the Internet at large are not "machine internals". If you do not want someone to access your systems, then you should configure your systems to not allow that access.
They are open ports which are accessible to the Internet at large. Or at least, to any site you go to. If you don't like that, there are various means to close off those ports to your browser (Windows firewall, network namespaces, etc).