
The way to make kernel modules is to submit them to the kernel. Not really sure what a “universal kernel module” really is.

Also that seems irrelevant because this was implemented in eBPF, so no kernel modules are required.


> The way to make kernel modules is to submit them to the kernel.

Then it would need to be published under the GPL, but with no guarantee that it will ever be accepted.

> Not really sure what a “universal kernel module” really is.

A .ko that can be loaded in a wide range of kernel versions.

> Also that seems irrelevant because this was implemented in eBPF, so no kernel modules are required.

And it has serious limitations. There's a whole section on them in the README.


wdym?

OSX has literally only ever been supported on very limited hardware, so how would it support anything else?


Did you read what this is about? Support for a printer people buy in stores. The kind of thing people expect to work?

Oh I thought you were referring to the VM part

Anyway Apple created CUPS so it should support anything Linux does when it comes to printing

edit: looks like they didn’t create it, they just hired the guy who did and shipped it


>I don't like free offerings, because what if they decide to charge someday? What if someone decides "free is not feasible, we start charging $20 per instance now".

You can just move to another provider at that point. At least when it comes to CDN and DNS there’s literally no vendor lock-in.

You can grab your DNS records, export them to CSV, and import them somewhere else easily, and a CDN is just a file server, so you can hand your files to another provider just as easily.


> At least when it comes to CDN and DNS there’s literally no vendor lock-in.

ehhhh, really depends on which CDN features you're using, and at what volume. Using ESI? VCL? Signed URLs or auth? Any other custom functionality? Are you depending on your provider's bot management features which are "CONTACT FOR PRICE" with other providers? Does your CDN provider have a special egress deal with your cloud provider?

It's possible to picture this being easy in the same way that being multi-cloud or multi-region is easy.


>Using ESI? VCL? Signed URLs or auth? Any other custom functionality? Are you depending on your provider's bot management features which are "CONTACT FOR PRICE" with other providers?

I have no idea what two of those acronyms mean. None of this is part of what a CDN offers.

Yes, if you use DDoS protection, or Cloudflare's Zero Trust, or embrace $X proprietary features, then what I said no longer applies.

I strictly said DNS and CDN.


ESI = Edge Side Includes: think Server Side Includes on a CDN. It's a technology supported by Akamai and used by sites like Ikea to deliver a fast, maintainable experience

VCL = Varnish Configuration Language, i.e. how you configure your Fastly services

If you're just using a CDN as a proxy then there's no lock-in, but plenty of sites are using CDNs for much more than that
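
To make the ESI point concrete, here's a minimal sketch of a page fragment (the URL and surrounding markup are made up); templates written this way have to be rewritten if you move to a CDN that doesn't speak ESI:

    <html>
      <body>
        <!-- the CDN edge fetches and inlines this fragment before serving the page -->
        <esi:include src="https://example.com/fragments/header.html" />
        <p>Cached static body, assembled at the edge.</p>
      </body>
    </html>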


Can anyone say why this is being downvoted? Seems like it makes sense to me, but this isn't my area of expertise.

Predictability matters. The whole point of paying someone else to handle a problem for you is that you don't have to worry about it. If you go all in on a provider and then suddenly find out that you've been switched to a paid plan in the middle of your vacation, that's not a place anyone wants to be. Saying there's no lock-in is nice, but that overlooks the fact that there most definitely is friction. What if there's no mass export? No mass import? Or you need to reset 2FA? There are a thousand things that can shoot you in the foot, especially if you have a lot of services you need to migrate.

It's impossible to generalize over free vs paid in regard to predictability. E.g. a provider I paid for simply disappeared once when I was quite busy, while my old free Gmail still works. Realistically, CF's free tier is more predictable than many paid options on the market.

My threat model here focuses on what the provider gets out of the free tier. Cloudflare gets a broad view into activity on the internet for building the models they use for their paid offerings. Free Gmail puts people on a path into Google's ecosystem with basically zero marginal cost.

Or your provider randomly decides you need to be on an enterprise plan: https://robindev.substack.com/p/cloudflare-took-down-our-web...

>What if there's no mass export? No mass import? Or you need to reset 2FA?

1. For DNS we have standardized AXFR (zone transfer) requests, which the DNS provider needs to support as they are part of the DNS standard (see the sketch after this list). Not having that isn't an option unless you have a really shitty provider that you should change anyway.

2. Same for mass import, because again DNS already defines these things at the protocol level.
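
As a concrete sketch of (1): with the dnspython library you can pull a whole zone over AXFR and dump it in zone-file form for re-import elsewhere. The nameserver and zone names below are placeholders, and this assumes your provider actually permits zone transfers from your IP:

    import dns.query
    import dns.rdatatype
    import dns.zone

    # Placeholder nameserver and zone; substitute your provider's values.
    # This only works if the provider allows AXFR from your address.
    zone = dns.zone.from_xfr(dns.query.xfr("ns1.example-provider.net", "example.com"))

    # Print every record in zone-file form so another provider can import it.
    for name, ttl, rdata in zone.iterate_rdatas():
        print(f"{name} {ttl} IN {dns.rdatatype.to_text(rdata.rdtype)} {rdata}")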

And resetting 2FA or whatever is just the cost of using any service

Personally I have used CF for ~10 years, so I have saved $240, and I simultaneously use GitHub Pages and CF Pages as CDNs because, again, I just need to give them a bunch of static files. Adding a third CDN provider would literally be a single command at the end of my build pipeline.


For personal projects, I'd rather just pay $2/month and not think about it than get hit with a random bill and scramble to migrate before the next month's bill. Bunny is perfect for this use case where you have a handful of projects that aren't all actively maintained. It just works without hand-holding, and since you're paying for the service, there's no rugpull looming.

Don't you still have to worry about big bills, since Bunny bills based on usage?

The biggest bill I've gotten from Bunny was like $10 when my app (https://atlasof.space) briefly went viral and got 100k+ views in a month. Bunny CDN is so reasonably priced and the realistic visitor ceiling for my projects is low enough that it's still negligible. The free->paid cliff is typically a lot steeper than this in my experience.

https://support.bunny.net/hc/en-us/articles/360000235911-How...

> Minimum Account Balance

> In order to keep your service online, you are required to keep a positive account credit balance. If your account balance drops low, our system will automatically send multiple warning emails. If despite that, you still fail to recharge your account, the system will automatically suspend your account and all your pull zones. Any data in your storage zones will also be deleted after a few days without a backup. Therefore, always make sure to keep your account in good standing.

You proactively replenish your balance, so in the worst case, you can just let the account go.


I used to handwave cloud portability. Turns out when you're shipping things and need extra services and you have deadlines, you build against the platform. I think the GP comment was probably expressing wariness of the free cloudflare tier that entices you to build against their APIs and their product shape in a way that inevitably locks you in. Sure, you could migrate, but that's expensive.

Yeah, good point. For a little hobbyist site of no importance, I'm not too worried about vendor lock-in, but that calculus changes as it gets more important.

That's the catch though. By the time you're scaling, there's tension between roadmap and revenue and headcount, and it's the worst possible time to need to re-architect.

I didn't downvote it, but I don't think migrating away from Cloudflare Workers, R2, D1, etc., is going to be that easy. Basically, they build these things from the ground up to work optimally on their infra; even the mental model you have to use is different. If you only narrowly use one part of it, maybe.

>Cloudflare Workers, R2, D1, etc., is going to be that easy.

And how is that related to me? Both my comment and the parent I replied to mentioned DNS and CDN.

Now we're adding compute services, data storage, whatever D1 is, and the other comment mentioned auth/authz

Are people not aware of what CDN and DNS are?


“What about…?” does not make for a good argument.

When the comment is a response to another that justifies current attacks on Iran because Iranian proxies killed Americans, it matters a big F'ing deal that those were in retaliation for the US historically scuttling Iranian parliamentary democracy and killing 50K Iranians by way of chemical munitions alone through its proxy.

No it doesn’t.

The post was replying to this:

>Iran doesn't use any of these to attack America.

This is false, as the post explained.

Saying “what about the US attacking Iran?” does not change the fact that the above is false. In fact, the US attacking Iran does not make the above any less false either.

Even if we accept both things as true:

1. Iran has historically attacked the US

2. The US has historically destabilized/attacked Iran

It doesn’t change the fact that “Iran does not use any of these (proxy groups) to attack America” is a false statement.

Skip me with your emotional arguments because I’ll just think you’re posturing and just trying to advance your agenda :-)


There is a difference between "attack" (which has a connotation of being unprovoked and in bad faith) and "retaliation" against acts of drawing first blood.

More so if those initial attacks killed 50K by way of proxies (100K according to more realistic estimates).

Sometimes, what one dishes out comes back. If it does, the rest of the world thinks it is only fair. Yes, Iran has been retaliating, very weakly, to counterbalance attacks on itself by the US and its proxies.

There is not much doubt on who acted in bad faith first.

The US hurting its toe by kicking a stone and then complaining that it is the stone that attacked is not a good argument.


It's like saying "a homeowner shot an armed burglar in self-defense," then crying "self-defense is whataboutism! That homeowner needs to face mob justice!" Nah. Those tactics of yours simply do not work anymore. Everyone sees through it. Iran wouldn't be doing any of this if they weren't constantly being bombed and attacked and having their leaders assassinated. The strait was open 2 months ago and had no issues. Two countries decided to ruin that and they deserve to face the consequences.

The fundamental issue is that installers shouldn’t exist

There’s no need to have an executable program just to essentially unzip some files to disk


>There’s no need to have an executable program just to essentially unzip some files to disk

What if you need to install some registry keys? What about installing shared dependencies (redistributables)? What if you want granny to install your app, and, left to her own devices, it'll end up in some random folder inside the Downloads folder?
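
For context on the registry point: keys can also be described declaratively in a .reg file and merged with regedit, no bespoke executable required. A minimal sketch, with made-up key and value names:

    Windows Registry Editor Version 5.00

    ; Hypothetical application settings; key and value names are illustrative.
    [HKEY_CURRENT_USER\Software\ExampleApp]
    "InstallDir"="C:\\Program Files\\ExampleApp"
    "ShowTips"=dword:00000001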


Then you can give the tokens to whoever you want

That does happen with alcohol, but it's rare because, well, people don't wanna go to jail for giving alcohol to a minor

So your proposal would have to come with liability towards the individual


Yes - pretty much the same as supplying tobacco/alcohol to minors. My point is that we've got a system which more or less works already, so it's just a matter of extending it for adult website verification.

Seems a lot of people have already consumed this as truth.

In the meantime a FOSS maintainer who is just trying to put the pieces in place to comply with the law (as written) got doxxed and harassed.

I hate it here


> In the meantime a FOSS maintainer who is just trying to put the pieces in place to comply with the law (as written) got doxxed and harassed.

In my experience, when a country like Britain passes a censorship law, people in other countries like America don't enjoy being given the tools to comply with it, even if the tools are entirely optional.


The main thing that caused this ruckus was a law passed in California, not the UK

Not that it matters, because doxxing and harassing developers is not acceptable.


You need to be very specific and also question the output if it does something insane

I've found it's less about specificity and more about reducing the number of critical assumptions it needs to make. Being too specific can be a hindrance in its own right.

And that's also a decent barometer for what it's good at: the more critical assumptions the AI needs to make, the less likely it is to make good ones.

For instance, when building a heat map, I don't have to get specific at all because the number of consequential assumptions it needs to make is slim. I don't care about, or can easily change, the colors or the label placement.


This decade’s version of “works on my box”

MacOS handles memory pressure better than Linux imo (at least for interactive use cases)

I have seen MacOS overcommit up to 50% of memory and still have the system be responsive.

Yesterday I accidentally filled up my RAM on Fedora, and even earlyoom took several minutes to trigger; in the meantime the system was essentially non-responsive


The plural of 'anecdote' is not 'data'.

It's exactly what it is

How do you think data is created? It's lots of anecdotes, normalised.


macOS uses solid-state drives for swap to help increase virtual memory. I can run multiple browsers and IDEs smoothly on my 8GB MacBook.

This is with earlyoom/systemd-oomd enabled?

From my experience it does not help much, and I still get occasional freezes when a program misbehaves on Linux. It’s not a huge problem, but it is a problem and it exists; I have been dealing with it for about 15 years with no significant improvement.

The earlyoom/oomd changes are quite recent... I've had a 'better' experience, but I guess it's not really fixed yet.

Yeah, Fedora ships systemd-oomd

It did eventually work too, but it took a while. Somehow it also did not kill the culprit runaway processes, but it did kill enough other stuff for me to regain control of the system.
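
For anyone who wants it to trigger earlier, systemd-oomd's knobs live in /etc/systemd/oomd.conf. A sketch with illustrative values (these are not Fedora's shipped defaults):

    # /etc/systemd/oomd.conf -- example values, not the shipped defaults
    [OOM]
    # Start acting once swap is this full.
    SwapUsedLimit=80%
    # Kill when memory pressure stays above this level...
    DefaultMemoryPressureLimit=50%
    # ...for at least this long.
    DefaultMemoryPressureDurationSec=20s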


>Windows 95 worked around this by keeping a backup copy of commonly-overwritten files in a hidden C:\Windows\SYSBCKUP directory. Whenever an installer finished, Windows went and checked whether any of these commonly-overwritten files had indeed been overwritten.

This is truly unhinged. I wonder if running an installer under wine in win95 mode will do this.


This is truly unhinged

Granted, but at the same time it's also resolutely pragmatic.

Apparently there was already lots of software out there which expected to be able to write new versions of system components. As well as buggy software that incidentally expected to be able to write old versions, because its developers ignored Microsoft's published best practices (not to mention common sense) and didn't bother to do a version comparison first.

The choice was to break the old software, or let it think it succeeded then clean up after the mess it made. I'd bet they considered other alternatives (e.g. sandbox each piece of software with its own set of system libraries, or intercept and override DLL calls thus ignoring written files altogether) but those introduce more complexity and redirection with arguably little benefit. (I do wonder if the cleanup still happens if something like an unexpected reboot or power loss happens at exactly the wrong time).

Could the OS have been architected in a more robust fashion from the get-go? Of course.

Could they have simply forbidden software from downgrading system components? Sure, but it'd break installers and degrade the user experience.

Since the OS historically tolerated the broken behavior, they were kind of stuck continuing to tolerate it. One thing I learned leading groups of people is if you make a rule but don't enforce it, then it isn't much of a rule (at least not one you can rely on).

I would argue the deeper mistake was not providing more suitable tooling for developers to ensure the presence of compatible versions of shared libraries. This requires a bit of game theory up front; you want to always make the incorrect path frictiony and the correct one seamless.


There was (and still is) VerInstallFile; however, this was introduced in Windows 3.1, and it is possible installers wanted to also support Windows 3.0 (since there wasn't much of a time gap between the two, many programs tried to support both), so they didn't use it.

It is important to remember that Microsoft created some of this chaos to begin with. Other aspects can be attributed to "the industry didn't understand the value of $x or the right way to do $y at the time". And some of this is "nonsense you deal with when the internet and automatic updates is not yet a thing".

Why did programs overwrite system components? Because Microsoft regularly pushed updates with VC++ or Visual Studio and if you built your program with Microsoft's tools you often had to distribute the updated components for your program to work - especially the Visual C runtime and the Common Controls. This even started in the Win3.11 days when you had to update common controls to get the fancy new "3d" look. And sometimes a newer update broke older programs so installers would try to force the "correct" version to be installed... but there's no better option here. Don't do that and the program the user just installed is busted. Do it and you break something else. There was no auto-update or internet access so you had to make a guess at what the best option was and hope. Mix in general lack of knowledge, no forums or Stack Overflow to ask for help, and general incompetence and you end up with a lot of badly made installers doing absolute nonsense.

Why force everyone to share everything? Early on, primarily for disk space and memory reasons. Early PCs could barely run a GUI, so a few hundred kilobytes to let programs have their own copy of common controls was a non-starter. There was no such thing as "just wait for everyone to upgrade" or "wait for WindowsUpdate to roll this feature out to everyone". By the early 2000s the biggest reason was because we hadn't realized that sharing is great in theory but often terrible in practice, and that a system to manage who gets what version of each library is critical. And we also later had the disk space and RAM to allow it.

But the biggest issue was probably Microsoft's refusal to provide a system installer. Later I assume antitrust concerns prevented them from doing more in this area. Installers did whatever because there were a bunch of little companies making installers and every developer just picked one and built all their packages with it. Often not updating their installer for years (possibly because it cost a lot of money).

Note: When I say "we" here that's doing a lot of heavy lifting. I think the Unix world understood the need for package managers and control of library versions earlier but even then the list of problems and the solutions to them in these areas varied a lot. Dependency management was far from a solved problem.


> This is truly unhinged.

This is bog-standard boring stuff (when presented with a similar problem, Linux invented containers lol) - read some of his other posts to realize the extent Microsoft went to maintain backwards compatibility - some are insane, some no doubt led to security issues, but you have to respect the drive.


It’s not bog-standard. Containers are not equivalent to doing what is described in the article.

Containers are in fact redirecting writes so an installer script could not replace system libraries.

The equivalent would be a Linux distro assuming that installer scripts will overwrite /usr/lib/libopenssl.so.1 with their own version, and just keeping a backup somewhere and copying it back after the script executes.

No OS that I know of does that, because it's unhinged, and on Linux it would probably break the system due to ABI incompatibility.
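
Spelled out, that hypothetical (and ill-advised) scheme would look something like this sketch; the paths and the installer hook are all made up:

    import hashlib
    import shutil
    from pathlib import Path

    # Hypothetical protected library and a stashed known-good copy.
    PROTECTED = Path("/usr/lib/libopenssl.so.1")
    BACKUP = Path("/var/lib/sysbckup/libopenssl.so.1")

    def digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def run_untrusted_installer() -> None:
        ...  # placeholder: run the vendor's install script here

    before = digest(PROTECTED)
    run_untrusted_installer()
    if digest(PROTECTED) != before:
        # The installer clobbered the library; quietly put the known-good copy back.
        shutil.copy2(BACKUP, PROTECTED)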

If they had taken essentially the same approach as wine and functionally created a WINEPREFIX per application then it would not be unhinged.

edit: also to be clear, I respect their commitment to backwards compatibility which is what leads to these unhinged decisions. I thoroughly enjoy Raymond Chen’s dev blog because of how unhinged early windows was.


Man, after looking at the veritable pile of stinking matter that is claude code, compare it with the NT 4 source leak.

Windows may have suffered its share of bad architectural decisions, but unhinged is a word that I wouldn't apply to their work on Windows.


I think you guys read “unhinged” as way more negative than I meant.

Just because I am saying it’s unhinged doesn’t mean I don’t think it’s cool

I’ve never read any Windows source, so I can still contribute to Wine, but I’ve read that the NT kernel is really high quality


It's easy to forget in these discussions that Microsoft didn't have infinite resources available when writing Windows, and often the dodgy things apps were doing only became clear quite late in the project as app compatibility testing ramped up. Additionally, they had to work with the apps and history they had; they couldn't make apps work differently.

You say, oh, obviously you should just redirect writes to a shadow layer or something (and later Windows can do that), but at the time they faced the rather large problem that there is no formal concept of an installer or package in Windows. An installer is just an ordinary program and the OS has no app identity available. So, how do you know when to activate this redirection, and what is the key identifying the layer to which redirects happen, and how do you handle the case where some writes are upgrades and others are downgrades, etc, and how do you do all that in a short amount of time when shipping (meant literally in those days) will start in just a few months?


I mean, it looks like they did try to redirect writes somehow. They probably tried saner options before arriving at this one.

>there is no formal concept of an installer or package in Windows.

This one is on them, I think; package managers already existed - it doesn't seem like there was ever a blocker for Windows to have a package manager, but Microsoft never bothered until very recently


With hindsight sure, but I don't think any desktop operating systems had package managers in that era. macOS certainly didn't. NeXTStep had their .app bundle concept, but no legacy. And UNIX package managers were of no use - few of them properly supported third party packages distributed independently of the OS vendor, especially not ones that could upgrade the OS itself.

Windows 95 was not Windows NT and it still used the FAT32 file system, where it was not really possible to enforce access rights.

As TFA says:

You even had installers that took even more extreme measures and said, “Okay, fine, I can’t overwrite the file, so I’m going to reboot the system and then overwrite the file from a batch file, see if you can stop me.”


Well, and the earliest versions of Windows 95 used FAT16 (specifically VFAT, for LFN or long file name support). So enjoy those ridiculous cluster sizes if your hard disk even approached a gig or so: FAT16 tops out at roughly 65,525 clusters, so a ~1 GB partition means 16 KB clusters.

That's because Windows locks files by name, versus by inode like Unix, so that was really the only way to update any in-use file, like libraries.

You are right that it’s not equivalent, but the article explains why redirecting the writes wasn’t a viable option.

> If they had taken essentially the same approach as wine and functionally created a WINEPREFIX per application then it would not be unhinged.

Man, wouldn't it have been nice if everyone had enough hard drive space in those days in order to do something like that...


Two words: proprietary installers.

If an installer expects to be able to overwrite a file and fails to do so, it might crash, leaving the user with a borked installation.

Of course you can blame the installer, but resolution of the problem might take a long time, or might never happen, depending on the willingness of the vendor to fix it.


> If . . . the replacement has a higher version number than the one in the SYSBCKUP directory, then the replacement was copied into the SYSBCKUP directory for safekeeping.

This as well. I know there are a million ways for a malicious installer to brick Win95, but a particularly funny one is hijacking the OS to perpetually rewrite its own system components back to compromised version number ∞ whenever another installer tries to clean things up.


What's unhinged about a periodic integrity check? Doesn't seem much different than a startup/boot check. If you're talking about security, you've come to the wrong OS.

Then blindly overwriting shared libraries despite the guidance that the OS vendor provides is actually hinged, yes?

You'd have to track down some 16-bit Win3.x software to install. Probably on floppy disks, since CD-ROMs weren't common.

I agree, it's unhinged for applications to overwrite newer versions of system files with older ones.
