We should be very concerned for the next generation. When you have the constant temptation of digging yourself out of a problem just by asking an LLM how will you ever learn anything?
My biggest lessons were from hours of pain and toil, scouring the internet. When I finally found the solution, the dopamine hit ensured that lesson was burned into my neurons. There is no such dopamine hit with LLMs. You vaguely try to understand what it’s been doing for the last five minutes and try to steer it back on course. There is no strife.
I’m only 24 and I think my career would be on a very different path if the LLMs of today were available just five years ago.
> We should be very concerned for the next generation. When you have the constant temptation of digging yourself out of a problem just by asking an LLM how will you ever learn anything?
This is just the same concern whenever a new technology appears.
* Socrates argued that writing would weaken memory, that it would create only superficially knowledge but incapable of really understanding. But it didn't destroy it. It allowed to store information and share it with many others far away.
* The internet and web indexers made information instantly accessible, allowing you to search for the information you just need, the fear is that people would just copy from the internet, yet researching information became way faster, any one with Internet access could access this information and learn themselves, just look at the amount of educational websites with courses to learn.
Each time a new technology came and people feared that it could degrade knowledge, the tools only helped us to increase our knowledge.
Just like with books and the internet, people could simply copy and not learn anything, its not exclusive to LLMs. The issue isn't in the tool itself, but how we use it. The new generation will probably instead of learning how to search, they will need to learn how to prompt, ask and evaluate whether the LLM isn't hallucinating or not.
Ok imagine you went back 30 years and you had a swarm of experts around you who you could ask anything you wanted and they would even do the work for you if you wanted.
Does this mean youd be incapable of learning anything? Or could you possibly learn way more because you had the innate desire to learn and understand along with the best tool possible to do it?
Its the same thing here. How you use LLMs is all up to your mindset. Throughly review and ask questions on what it did, or why, ask if we could have done it some other way instead. Hell ask it just the questions you need and do it yourself, or dont use it at all. I was working on C++ for example with a heavy use of mutexs, shared and weak pointers which I havent done before. LLM fixed a race condition, and I got to ask it precisely what the issue was, to draw a diagram showing what was happening in this exact scenario before and after.
I feel like Im learning more because I am doing way more high level things now, and spending way less time on the stuff I already know or dont care to know (non fundementals, like syntax and even libraries/frameworks). For example, I don't really give a fuck about being an expert in Spring Security. I care about how authentication works as a principal, what methods would be best for what, etc but do I want to spend 3 hours trying to debug the nuances of configuring the Spring security library for a small project I dont care about?
> Does this mean you'd be incapable of learning anything?
Yes. This strikes me as obvious. People don't have the sort of impulse control you're implying by default, it has to be learnt just like anything else. This sort of environment would make you an idiot if it's all you've ever known.
You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
I see it as being more personality/interest than impulse control. A curious/interested person would try and get involved and be a part of it, someone uninterested will just say what's the point and get by having the work done for them.
It may very well have stunted my learning. What’s the point of absorbing information when you have a consortium of experts available 24/7?
Saying what you said about it being down to being how you use LLM comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skillsets today starting from zero?
Supposedly because AI has limits and you still have to know what you're doing so you can guide it and do it better.
If that's not true, then what's the problem with not learning the material? Go do something more productive with your time if the personal curiosity isn't good enough. Were in a whole new world.
>Saying what you said about it being down to being how you use LLM comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skillsets today starting from zero?
This is true, and I can't answer that 100% confidently. I imagine I would just be doing more more/complicated things and learning higher level concepts. For example, if right off the bat I could produce a web app, Id want to deploy it somewhere. So Id come across things like ssh, nginx, port forwarding, jars, bundles, DNS, authentication, etc. Do this a 1000 times just the way I wrote 1000 different little functions or programs by hand and you'll no shit absorb little here and there as issues come up. Or maybe if whats hard a year ago is easy today, Id want to do something far more incredibly complex than anything anyone's been able to imagine before, and learn in that struggle.
Programmers in the 90s were far more apt at understanding CPU registers, memory and all sorts of low level stuff. Then the abstraction moved up the stack, and then again and again. I think same thing will happen.
Also, you can't say Im in a privileged position for already knowing how to code and at the same time asking what's the point of learning it yourself.
The problem is that the abstraction level moved up so far that we're now programming in the English language, and we're more like managers than programmers. This will only get worse. The next step will be that AIs run entire companies. And BigAI will not allow us to profit from that because they will just run the AI themselves, the current situation was just a stepping stone.
As an older person, I'm not worried. The world changes all the time. People are put people in difficult situations, and they have to adapt. "Oh no, how will people learn things?" is not that big of a struggle in the grand scheme. We're not burning books or giving people lobotomies. People can still learn if they want to, easier than ever before. Businesses will adapt, people will adapt, by necessity. Things will be very different, sure. But then we get used to the difference, and it becomes normal.
Kids today couldn't imagine how people used to live just 100 years ago, like it was the dark ages. People from that age would probably look at kids 10 years ago and think, these poor children! They don't know how to work hard! They don't know anything about life! They're glued to these bizarre light machines! Every age is different.
At the beginning of the internet, I used to save all webpages where I’d find info, just in case I would be stuck without a connection or if the website removed it. I had parts of the MDN.
The internet never fell. I bet it’ll be the same with AI. You will never not have AI.
The big difference is the internet was a liberation movement: Everything became open. And free. AI is the opposite: By design, everything is closed.
We have Copilot and Claude Code. The rationale of having both is that Copilot is cheap anyway, grants access to a range of models, and some developers still haven’t moved over to CC. There’s also the smart auto completion which is actually free now but AFAIK there’s no Claude Code equivalent.
Whom do we trust regulation with? Current US admin which is being run by team idiocracy, Europe that is run by senile men who don't even understand tech or can't even come to a consensus on smallest of issues or China which only does things that benefit their autocrats?
The issue is much more complex than "just regulate it" unfortunately.
Sure, but the reality is that the United States where these companies are headquartered currently has the exact opposite policy: Anthropic has been blacklisted by the DoW (and replaced by OpenAI) because the US administration thought that the very limited amount of self-regulation Anthropic insisted on was going too far.
We need an AI workers union. The real power and discernment is in the hands of the people building these systems. They are extremely difficult to replace and firing them basically guarantees they go to a competitor.
https://notdivided.org/ is basically validation that there is appetite for something like this amongst them.
I’m all for regulation of AI, but that’s not a serious solution where the problem is the government pressuring private companies to do evil things. Consumer pressure isn’t much, but it’s not nothing.
> Next week Anthropic will do something evil and everyone will be moving back to OpenAI.
Anthropic has been, relatively speaking, the most responsible of the frontier labs since its founding. There has never been a point at which OpenAI took a more measured and reasonable approach while Anthropic proceeded dangerously.
These are relative terms, but you'd have to not be paying attention to find this plausible.
Cancelling my account may be a small action but it is not pointless. Expressing my views and voting with my wallet is my right. Even your seemingly pointless question is a good reminder of the impact we can have - thanks!
I was reading your other blog post about storing them in bitwarden I have to disagree with this point:
> Unless you were forced to by some organisational policy, there’s no point setting up 2FA only to reduce the effective security to 1FA because of convenience features.
2FA both stored in your password manager is less secure than storing than separately, but it still offers security compared to a single factor. The attack methods you mentioned (RAT, keylogger) require your device to be compromised, and if your device is not compromised 2fa will help you.
To slip into opinion mode, I consider my password manager being compromised to be mostly total compromise anyway.
Also I really like the style and font of your blog.
> To slip into opinion mode, I consider my password manager being compromised to be mostly total compromise anyway.
But how is that no the entire point? If your 2FA is a proper device, like a Yubikey, the attack surface is tinier than tiny and the device ensures that your secret never leaves the device.
We did see cases of passwords managers getting compromised. We haven't seen yet a secret being extracted from a Yubikey.
So where you say you consider that your software (password manager) getting compromised is total compromise, we're saying: "as long as the HSM on a Yubikey does its job, we have actual 2FA and there cannot be a total compromise".
You're right, I should have been more clear in that I meant a local compromise of the machine running the password manager client, not the server running the password manager itself. If my sessions and all of my data can be intercepted, the yubikey 2fa seems like it's only saving me from a token "nobody can login remotely to this one service" which at that point seems pretty moot
Yubikey offers a false sense of security in that regard, unfortunately, because if your device is thoroughly 0wned and you don't know it, the attacker "just" has to wait for the victim to do something that would trigger the yubikey, and then swap in their forged request instead. Eg if the victim uses the yubikey to log into bank1 and to crypto wallet, but bank1 accounts have no money, instead of waiting for the customer to log into their crypto wallet with the yubikey, the attack software waits for the victim to log into bank1, but swaps in a request to the crypto wallet instead.
This isn't a footgun, you just have absurd security requirements.
>It should be pretty obvious that using a passkey, which lives in the same password manager as your main sign-in password/passkey is not two factors. Setting it up like this would be pointless.
You simply do not need two factors with passkeys. Using passkeys is not pointless, they are vastly more secure than most combined password+2fa solutions.
There are extremely few contexts where an yubikey would be meaningfully safer than the secure element in your macbook.
To be clear. Proper 2FA, via something like a smartcard or any truly external device is still much more secure. You could have one of those factors be a passkey, that's fine, and may be a good idea.
But there are UX issues with passkeys as well, that aren't all well addressed. My biggest gripe is that there is often no way to migrate from one passkey provider to another, though apparently there may be a standard for this in the works?
Not who you are replying too. But a yubikey is not a weak factor.
In fact, it’s not even meaningfully more secure than passkey (as passkey is designed) - passkey is, however, more convenient.
So it’s more ‘one weak factor + (really times) one medium/strong factor’ vs ‘one medium/strong factor’.
Which yes, the first one is better in every way from a security perspective. At least in isolation.
The tricky part is that passkeys for most users are way more convenient, meaning they’ll actually get used more, which means if adopted they’ll likely result in more actual security on average.
Yubikeys work well if you’re paying attention, have a security mindset, don’t lose them, etc. which good luck for your average user.
if 2fa is "use the second factor that's on same device as first factor" (like when using phone apps in many cases, password + 2fa from email/sms/authenticator app on same device), I disagree.
> It should be pretty obvious that using a passkey, which lives in the same password manager as your main sign-in password/passkey is not two factors. Setting it up like this would be pointless.
If your password manager is itself protected by two factors, I'd still call this two-factor authentication.
Passkeys can absolutely constitute two factors. At least the iOS and Android default implementations back user verification (which the website/relying party can explicitly request) with biometric authentication, which together with device possession makes them two factor.
That's not what two-factor means. Forget about passkeys -- if you use a password manager, and that password manager has a biometric lock, your accounts don't thereby have a biometric lock as a second factor. The transitive property doesn't apply here.
Someone gotta tell all these SaaS about that if so, because currently everyone is treating Passkeys as an alternative to 2FA. Take a look at how GitHub handles it for example when you use TOTP, they'll ask you to replace TOTP with passkeys.
They are an alternative to 2FA. Which means they aren't 2FA. If they were 2FA, they wouldn't be an alternative to 2FA. They'd just be 2FA.
Anyway, passkeys and FIDO broadly aren't the same thing. You can read the definition of passkeys at https://fidoalliance.org/passkeys/ or look at any of the marketing, which invariably talks about how great it is that you don't have to futz with passwords anymore.
FIDO credentials in general can obviously also be used as second factors. This is baked into the name of the original standard: U2F, Universal 2nd Factor. The specific point of passkeys though is that they're the single factor.
Many do what you describe, probably because some manager somewhere needs to tick some checkbox.
But GitHub, specifically, allows you to sign in with a passkey. On the sign-in page, there's a "sign in with passkey" link. It activates my 1Password extension, asking if I want to use my passkey. I say yes, and I'm in, I don't type anything. This also works the same way with my YubiKey.
I think Altman probably rationalised it to himself by thinking that if he doesn’t do it, Musk/xAI will, and they give zero fucks about safety. So maybe he told himself that it’s better if OpenAI does it.
The billion engineers building sandbox tools at the moment are missing the point. Sandboxing doesn't matter when the LLM is vulnerable to prompt injection. Every MCP server you install, every webpage it fetches, every file it reads is a threat. Yeah you can sit there and manually approve every action it takes, but then how is any of this useful when you have to supervise it constantly? Even Anthropic say that this doesn't work because reviewing every action leads to exhaustion and rubber stamping.
The problem is not what the LLM shouldn't have access to, it's what it does have access to.
The usefulness of LLMs is severely limited while they lack the ability to separate instructions and data, or as Yann LeCun said, predict the consequences of their actions.
Prompt injection is hard but I believe tractable. I've found that by having a canary agent transform insecure input into a structured format with security checks, you can achieve good isolation and mitigation. More at https://sibylline.dev/articles/2026-02-22-schema-strict-prom...
In recent years I’ve noticed a massive rise in what could be called ‘financial degeneracy.’ Shitcoins, NFTs, sports betting, even stuff like Pokemon cards just seem to be a vehicle for trying to make a quick buck and find a bigger idiot. Prediction markets are a new one in the series. Probably linked to economic nihilism. Young people giving up on the dream of success via traditional means.
People say you don’t need Redis, you can just use Postgres LISTEN/NOTIFY. I read a blog post about it locking the entire database and causing problems[0]. Apparently it’s fixed now but this only just being discovered in 2025 doesn’t give me confidence it’s battle tested for prod.
At some point shoving everything into Postgres is an anti pattern too.
Yes, I’m more worried about the false confidence such technology could create. Implement an authenticity mechanism and it will be treated as truth. Powerful people will have the means to spoof photographic evidence.
My biggest lessons were from hours of pain and toil, scouring the internet. When I finally found the solution, the dopamine hit ensured that lesson was burned into my neurons. There is no such dopamine hit with LLMs. You vaguely try to understand what it’s been doing for the last five minutes and try to steer it back on course. There is no strife.
I’m only 24 and I think my career would be on a very different path if the LLMs of today were available just five years ago.
reply