I had a machine (an AMD 3700X with 32 GB of RAM and a fast NVMe SSD) on which I used to run Debian. Then about 2.5 years ago I bought a new one and gave my wife the 3700X: I figured she'd be more at ease with Ubuntu, so I installed it for her.
I couldn't understand why everything was so slow compared to Debian, and I didn't want to bother looking into it, so...
After a few weeks I got rid of Ubuntu and installed Debian for her. A simple IceWM setup (I use the tiling Awesome WM, but that's too radical for my wife), and she loves it.
She basically manages her two SMEs entirely from a browser: Chromium or Firefox (but a fork of Firefox would do too).
It has worked so well for years now that for her latest hire she asked me to set the new employee up with the same config. So she's now got one employee on a Debian machine with IceWM. The other machines are still on Windows, but the plan is to keep only one Windows machine (just in case) and move the rest to Debian too.
Unattended upgrades, a trivial firewall (everything outbound allowed, inbound only related/established), and that's it.
I had used ubuntu back in the day, and when I came back to linux a bit ago I immediately installed it again.
I don't remember all of my frustrations, but I remember having a lot of trouble with snap. Specifically, it really annoyed me that the default install of firefox was the snap version instead of native. I want that to be an opt-in kind of thing. I found that flatpak just worked better anyway.
I almost tried making the switch to arch, but I've been pretty happy running debian sid (unstable) since. The debian installer is just more friendly to me for getting encrypted drives and partitions set up how I want.
It's not for everyone, but I like the structured rolling updates of sid and having access to the debian ecosystem too much to switch to something else at this point.
I use sway with a radeon card for my primary and have a secondary nvidia card for games and AI stuff.
> I wonder what adaptations will be necessary to make AIs work better on Lisp.
Some are going to nitpick that Clojure isn't as lispy as, say, Common Lisp but I did experiment with Claude Code CLI and my paid Anthropic subscription (Sonnet 4.6 mostly) and Clojure.
It is okay'ish. I got it to write a topological sort and pure (no side effect) functions taking in and returning non-totally-trivial data structures (maps in maps with sets and counters etc.). But apparently it's got problems with...
... drumroll ...
The number of parentheses. It's so bad that the author of figwheel (a successful ClojureScript project) is working on a Clojure MCP server that fixes parens in Clojure code spouted by AI (well, the project does more than that, but the description literally says it's "designed to handle Clojure parentheses reliably").
You can't make that up: there's literally an issue with the number of closing parens.
Now... I don't think giving an AI access to a Lisp REPL and telling it: "Do this by bumping on the guardrails left and right until something is working" is the way to go (yet?) for Clojure code.
I'm passing it a codebase (not too big, so no context-size issue) and I know what I want: I tell it "Write a function which takes this data structure in and that other parameter; the function must do xxx; the function must return the same data structure out". Before that, I told it to also implement tests (relatively easy since they're pure functions) for each function it writes and to run the tests after each function it implements or modifies.
There was a thread about this the other day [1]. It's the same issue as "count the r's in strawberry." Tokenization makes it hard to count characters. If you put that string into OpenAI's tokenizer, [2] this is how they are grouped:
Token 1: ((((
Token 2: ()))
Token 3: )))
Which of course isn't at all how our minds would group them together in order to keep track of them.
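To see why the grouping matters, here is a toy greedy longest-match tokenizer over a made-up vocabulary of paren runs. This is purely illustrative (real BPE vocabularies are learned from data), but the effect is the same: runs of parens get chunked into opaque tokens that don't expose individual characters.

```python
# Toy greedy tokenizer over a made-up vocabulary of paren runs.
# Illustrative only: real BPE merges are learned from data, but the
# effect is similar -- runs of parens get chunked arbitrarily.
VOCAB = ["((((", "))))", "(((", ")))", "((", "))", "(", ")"]  # longest first

def tokenize(s):
    tokens = []
    i = 0
    while i < len(s):
        for tok in VOCAB:  # greedy longest match
            if s.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
        else:  # not in the vocabulary: fall back to a single character
            tokens.append(s[i])
            i += 1
    return tokens

# Fourteen parens become four opaque chunks; nothing in the token
# stream says "seven opens, seven closes".
print(tokenize("((((((()))))))"))
```

From the model's point of view, "how many closing parens are left to emit" is a question about characters it never directly sees.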
This is mostly because people wrongly assume that LLMs can count things. Just because it looks like one can, doesn't mean it can.
Try to get your favourite LLM to read the time from a clock face. It'll fail ridiculously most of the time, and come up with all kinds of wonky reasons for the failures.
It can code things that it's seen the logic for before. That's not the same as counting. That's outputting what it's previously seen as proper code (and even then it often fails, probably because there's a lot of crap code out there).
But for lisp, a more complex solution is needed. It's easy for a human lisp programmer to keep track of which closing parentheses corresponds to which opening parentheses because the editor highlights parentheses pairs as they are typed. How can we give an LLM that kind of feedback as it generates code?
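One cheap way to provide that feedback is to run a balance check over each generated form and hand the result back to the model as a tool response. A minimal sketch follows (the function and its messages are mine, not from any existing project; it ignores strings and comments, which a real linter would have to handle):

```python
def check_parens(code):
    """Return (ok, message) for a piece of Lisp-like code.

    A stack of positions stands in for the editor's pair highlighting:
    on failure we can tell the model *where* the mismatch is, not just
    that one exists.
    """
    stack = []
    for pos, ch in enumerate(code):
        if ch == "(":
            stack.append(pos)
        elif ch == ")":
            if not stack:
                return False, f"unmatched ')' at position {pos}"
            stack.pop()
    if stack:
        return False, f"{len(stack)} unclosed '(' (first at position {stack[0]})"
    return True, "balanced"

print(check_parens("(defn f [x] (inc x))"))    # balanced
print(check_parens("(let [x 1] (println x)"))  # one '(' never closed
```

Feeding back the position of the first unclosed paren gives the model something closer to the targeted signal a human gets from the editor, instead of a bare "syntax error".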
Try asking an LLM a question like "H o w T o P r o g r a m I n R u s t ?" - each letter, separated by spaces, will be its own token, and the model will understand just fine. The issue is that computational cost scales quadratically with the number of tokens, so processing "h e l l o" is much more expensive than "hello". "hello" has meaning, "h" has no meaning by itself. The model has to waste a lot of computation forming words from the letters.
Our brains also process text entire words at a time, not letter-by-letter. The difference is that our brains are much more flexible than a tokenizer, and we can easily switch to letter-by-letter reading when needed, such as when we encounter an unfamiliar word.
I am lazy: when an LLM messes up parentheses when working with any Lisp language, I just quickly fix the mismatch myself rather than try to fix the tooling.
Sometimes LLMs astonish me with the code they can write. Other times I have to laugh or cry.
As an example, I asked claude 3.5 back when that was the latest to indent all the code in my file by four more spaces. The file was about 700 lines long. I got a busy spinner for two minutes then it said, "OK, first 50 lines done, now I'll do the rest" and got another busy spinner and it said, "this is taking too long. I'm going to write a program to do it", which of course it had no problem doing. The point is that it is superhuman at some things and completely brain-dead about others, and counting parens is one of those things I wouldn't expect it to be good at.
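The program the model reached for is genuinely trivial, which is the point: a few lines of code beat pushing 700 lines through a context window token by token. A sketch of what such a script might look like (hypothetical; the original wasn't shown):

```python
def indent_by_four(text):
    # Prepend four spaces to every non-blank line; blank lines stay
    # blank so we don't introduce trailing whitespace.
    return "\n".join(
        ("    " + line) if line.strip() else line
        for line in text.split("\n")
    )

sample = "def f():\n    return 1\n\nprint(f())"
print(indent_by_four(sample))
```

A mechanical transformation like this is exactly the kind of task where writing the program is cheaper and more reliable than having the model emit the transformed text itself.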
I think LLMs are great at compression and information retrieval, but poor at reasoning. They seem to work well with popular languages like Python because they have been trained with a massive amount of real code. As demonstrated by several publications, on niche languages their performance is quite variable.
That was me at the time kicking the tires to understand what it was good at or not. If I actually wanted to indent a file by four spaces it would take me less time in my editor than to prompt the LLM to do it, even if the LLM had been capable of it.
I had that issue with the AI while doing some CL dabbling.
Things, on the whole, were fine, save for the occasional, rogue (or not) parentheses.
The AI would just go off the rails trying to solve the problem. I told it that if it ever encountered the problem to let me know and not try to fix it, I’d do it.
AIUI in that thread they're saying "0.51x" the perf on a 96-core arm64 machine and they're also saying they cannot reproduce it on a 96-core amd64 machine.
So it's not going to affect everybody who is both running PostgreSQL and upgrading to the latest kernel. The conditions seem to be: arm64, shitloads of cores, kernel 7.0, current version of PostgreSQL.
That is not going to be 100% of the installed PostgreSQL DBs out there in the wild when 7.0 lands in a few weeks.
It's a huge issue with ARM-based systems that hardly anyone uses or tests things on them (in production).
Yes, Macs going ARM has been a huge boon, but I've also seen crazy regressions on AWS Graviton (compared to how it's supposed to perform), on .NET (and Node as well), which frankly I have no expertise or time to dig into.
Which was the main reason we ultimately cancelled our migration.
I'm sure this is the same reason why it's important to AWS.
Macs are actually part of the pain point with ARM64 Linux, because Linux arm64 distros tend to use 64 kB pages while macOS supports only 4 and 16, and it causes non-trivial bugs at times (funnily enough, I first encountered that in a database company...)
Yes, I did reproduce it (to a much smaller degree, but it's just a 48c/96t machine). But it's an absurd workload in an insane configuration. Not using huge pages hurts way more than the regression due to PREEMPT_LAZY does.
With what we know so far, I expect that there are just about no real world workloads that aren't already completely falling over that will be affected.
So why does it happen only without huge pages? Is the extra overhead / TLB pressure enough to trigger the issue in some way? Or is it because the regular pages get swapped out (which huge pages can't be)?
I don't fully know, but I suspect it's just that, due to the minor faults and TLB misses, there is terrible contention on the spinlock when using 4k pages, regardless of PREEMPT_LAZY (that's easily reproducible). Which is then made worse by preempting more with the lock held.
So perhaps this is a regression specifically in the arm64 code, or said differently maybe it’s a performance bug that has been there for a long time but covered up by the scheduler part that was removed?
Turns out the AMD machine had huge pages enabled, and after disabling those the regression was there on amd64 too. So arm64 vs amd64 was a red herring.
Of course it's not a nice regression, but you should not run PostgreSQL on large servers without huge pages enabled, so this regression will only hurt people who have a bad configuration. That said, I think these bad configurations are common out there, especially in containerized environments where the one running PostgreSQL may not have the ability to enable huge pages.
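Checking whether a host actually has explicit huge pages reserved is a one-file read on Linux. A minimal parsing sketch (the function name is mine); `HugePages_Total: 0` is exactly the misconfiguration described above:

```python
def hugepage_counters(meminfo_text):
    """Extract the HugePages_* counters from /proc/meminfo contents."""
    counters = {}
    for line in meminfo_text.splitlines():
        if line.startswith("HugePages_"):
            name, value = line.split(":")
            counters[name.strip()] = int(value.split()[0])
    return counters

# Typical usage on a Linux host:
# with open("/proc/meminfo") as f:
#     print(hugepage_counters(f.read()))
```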
That should be obvious to anyone who read the initial message. The regression was caused by a configuration change that changed the default from PREEMPT_NONE to PREEMPT_LAZY. If you don't know what those options do, use the source. (<https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...>)
Yes, I had a good laugh at that. It might technically be a regression, but not one that most people will see in practice. Pretty weird that someone at Amazon is bothering to run those tests without hugepages.
I doubt they explicitly said "I'll run without huge pages, which is an important AWS configuration". They probably just forgot a step. And "someone at Amazon" describes a lot of people; multiply your mental probability tables accordingly.
The number of people at Amazon is pretty much irrelevant; the org is going to ensure that someone is keeping an eye on kernel performance, but also that the work isn’t duplicative.
Surely they would be testing the configuration(s) that they use in production? They’re not running RDS without hugepages turned on, right?
> The number of people at Amazon is pretty much irrelevant; the org is going to ensure that someone is keeping an eye on kernel performance, but also that the work isn’t duplicative.
I'd guess they have dozens of people across, say, a Linux kernel team, a Graviton hardware integration team, an EC2 team, and an Amazon RDS for PostgreSQL team who might at one point or another run a benchmark like this. They probably coordinate to an extent, but not so much that only one person would ever run this test. So yes, it is duplicative. And they're likely intending to test the configurations they use in production, yes, but people just make mistakes.
True; to err is human. But it is weird that they didn’t just fire up a standard RDS instance of one or more sizes and test those. After all, it’s already automated; two clicks on the website gets you a standard configuration and a couple more get you a 96c graviton cpu. I just wonder how the mistake happened.
No… I’m assuming that they didn’t use the same automation that creates RDS clusters for actual customers. No doubt that automation configures the EC2 nodes sanely, with hugepages turned on. Leaving them turned off in this benchmark could have been accidental, but some accident of that kind was bound to happen as soon as the tests use any kind of setup that is different from what customers actually get.
You're again assuming that having huge pages turned on always brings the net benefit, which it doesn't. I have at least one example where it didn't bring any observable benefit while at the same time it incurred extra code complexity, server administration overhead, and necessitated extra documentation.
It is a system-wide toggle in the sense that it requires you to first enable huge pages and then set them up, even if you just want to use explicit huge pages from within your code only (madvise, mmap). I wasn't talking about THP.
When you deploy software all around the globe and not only on your servers that you fully control this becomes problematic. Even in the latter case it is frowned upon by admins/teams if you can't prove the benefit.
Yes, there are workloads where huge pages do not bring any measurable benefit; I don't understand why that would be questionable. Even if they don't bring runtime performance down, which they could, the extra work and complexity they incur are in a sense not optimal compared to the baseline of not using huge pages.
For production Postgres, I would assume it's close to almost no effect?
If someone is running Postgres in a serious backend environment, I doubt they are using Ubuntu or even touching 7.x for months (or years). It'll be some flavor of Debian or Red Hat still on 6.x (maybe even 5.x). Those same users won't touch 7.x until there have been months of testing by distros.
Ubuntu is used in many serious backend environments. Heroku runs tens of thousands (if not more) instances of Ubuntu on its fleet. Or at least it did through the teens and early 2020s.
And they are right: this is because a lot of junior sysadmins believe that newer = better.
But the reality:
a) may get irreversible upgrades (e.g. new underlying database structure)
b) permanent worse performance / regression (e.g. iOS 26)
c) added instability
d) new security issues (litellm)
e) time wasted migrating / debugging
f) may need rewrite of consumers / users of APIs / sys calls
g) potential new IP or licensing issues
etc.
A few of the reasons to upgrade something are:
a) new features provide genuine comfort or performance upgrade (or... some revert)
b) there is an extremely critical security issue
c) you do not care about stability because reverting is uneventful and production impact is nil (e.g. Claude Code)
but 99% of the time, if it ain't broke, don't fix it.
On the other hand, I suspect LLMs will dramatically decrease the window between a vulnerability being discovered and that vulnerability being exploited in the wild, especially for open-source projects.
Even if the vulnerability itself is discovered through other means than by an LLM, it's trivial to ask a SOTA model to "monitor all new commits to project X and decide which ones are likely patching an exploitable vulnerability, and then write a PoC." That's a lot easier than finding the vulnerability itself.
I won't be surprised if update windows (for open source networked services) shrink to ~10 minutes within a year or two. It's going to be a brutal world.
Too often I see IT departments use this as an excuse to only upgrade when they absolutely have to, usually with little to no testing in advance, which leaves them constantly being back-footed by incompatibility issues.
The idea of testing new versions in advance (versions they'll be forced to use eventually) never seems to occur to them, or they spend so much time fighting fires that they never get around to it.
I’ve seen more 5k+-core fleets running Ubuntu in prod than not, in my career. Industries include healthcare, US government, US government contractor, marketing, finance.
I'd say about 2/3 of the places I've worked started on Linux without a Windows precedent other than workstations. I can't speak for the experience of the founding staff, though; they might have preferred Ubuntu due to Windows experience--if so, I'm curious as to why/what those have to do with each other.
That said, Ubuntu in large production fleets isn't too bad. Sure, other distros are better, but Ubuntu's perfectly serviceable in that role. It needs talented SRE staff making sure automation, release engineering, monitoring, and de/provisioning behave well, but that's true of any you-run-the-underlying-VM large cloud deployment.
"While living in the United States, she promoted Iranian regime propaganda, celebrated attacks against American soldiers and military facilities in the Middle East, praised the new Iranian Supreme Leader, denounced America as the “Great Satan,” and voiced her unflinching support for the Islamic Revolutionary Guard Corps, a designated terror organization."
What a bunch of great individuals! The UN warned, two days ago, that in 2026 the Islamic guards in Iran have already executed more people than they average in a whole year. Some 600 people. That's not even counting the 30,000+ who were slaughtered for demonstrating against the regime.
Does anyone have anything to say to criticize ICE about those removals? I mean, really: what is the argument of those saying that ICE shouldn't deport those who, on US soil, hail the great deeds of islamists terrorists?
That is an important piece of context that isn't mentioned in the linked article. Instead, the state department makes it seem like they were kicked out for espousing opinions that the government doesn't like.
Studies related to vitamin D supplementation are the most important because in many cases the correlation between low vit D and "lots of bad stuff" is established. But what's not always known is if supplementation helps or not.
Now as I'm a simple man and as I know that there aren't side-effects to vit D supplementation [1], I take my supplements.
[1] except in the mind of intellectually dishonest people who will trot out that one case of a person who took 10,000 IU of vit D per day for 10 years and ended up having this or that (non-life-threatening) issue. But those intellectually dishonest people would have no problem telling you to "be careful while drinking water, for one person who drank 20 liters of water per day got into trouble!", so do like me: ignore these pharma-lab-paid shills.
Of course adequate vitamin D3 softgel supplementation helps, for the simple reason that it almost always effectively raises the blood level of vitamin D, thereby maintaining its sufficiency. Those who struggle with such simple logic will struggle hard in life.
Elderly people or those with heavy sunlight exposure or unhealthy kidneys or magnesium deficiency can exceed the target blood range of vitamin D even with 5K IU of vitamin D3 per day. I consider anything over 60 ng/mL to be in a strict excess of the target range.
The auto-dubbing drives me crazy: I have to reach for the settings to prevent it from happening.
However, there's something I'd love, and I think we're nearly there (heck, I could probably implement it myself by now with all the models we have): real-time accent change. Same language, same sentences, but fix the accent.
There are some English accents, arguably proper ones (but still unlistenable), which I simply cannot stand. My daughter happens to be very good at imitating one of them (FWIW she's only ever been to British schools, so she speaks with a lovely British accent) and she knows I cannot stand it. But it comes from a very big country, with many people I admire: a great culture, and they make a lot of incredibly helpful vids on YouTube, which I watch all the time. Lots of DIY stuff. But the accent: it's killing me. It's so thick, so bad, so unlistenable that I wish I could have an option to, say, "re-dub that YouTube vid with a Texan accent" (because why not) or "re-dub that YouTube vid with a British accent". Really: if you're from that country, lots of peace and hi to my friends from there. But your English accent basically sucks even more than my French accent: it's totally unlistenable.
Please, for the sake of humanity, help us automatically re-dub English vids made by native French speakers like myself or native speakers from India.
> Why on earth would you install something like that has access to your entire machine, even if it is a separate one which has the potential to scan local networks?
I'd say it's a given that we live in a world where your LAN is infested with compromised and hostile devices: from phones (spying devices) to home automation (spying Chinese webcams) to TVs (with the TV's microphone listening 24/7 to everything people are saying) to Chinese routers (which, yup, have backdoors for the Chinese state) to that Korean soundbar to really whatever enshittified device the world of enshittified turds we live in can come up with.
It is a fact of life that devices that are compromised, insecure, backdoored, and at times all three of these shall find their way into our homes and apartments...
And it shouldn't be an issue.
What I mean by this: machines could be scanning my local networks and even maybe determine that this box at this IP is running Linux and... they still should be able to do exactly jack fucking shit with that information.
We must all learn to secure our devices, for the Internet of Insecure and Enshittified Things is moving forward at godspeed. And if you think OpenClaw on its own device on your LAN is bad, wait until all the companies that have been selling enshittified devices for years realize they'll now be able to enshittify those even more by slapping OpenClaw (or the equivalent) on their devices.
These insecure turds are all going to get a big boost of insecurity, this time AI-powered.
I'd say: bring it on. I'm ready. We all should be.
> On a 34" ultrawide monitor, it was too easy to put YouTube running on the left side
This has zero to do with an ultra-wide monitor and all to do with a lack of self-discipline.
I bought one of the first 38" ultra-wide monitors that LG put out and, ten years later, I'm still rocking it every day.
You know what? My main computer doesn't even have sound. You read that correctly. No sound. So no YouTube vids. No games. Not that I'd be tempted; it's just that I actually have zero need for sound on that machine.
And I'm no luddite: I've got two servers at home, more in datacenters, countless Pi's, NUCs, and laptops. But on my work machine: it is no sound and a 38" ultra-wide.
If you need to use a monitor the size of a stamp to make sure you can't run youtube vids at the same time you're working, the issue is you, not the monitor.