I understand how laughable that sounds when I say it out loud. But the reality is, when I'm in a state of 'Tell LLM what to do, verify, repeat', it's sometimes really hard to break out of that loop and do manual fixes.
Maybe the brain has some advanced optimization where, once you're in a loop, staying roughly inside that loop has lower impedance than starting a new one. Maybe that's why the flow state feels so magical: it's when resistance is at its lowest. Maybe I need sleep.
>> it's sometimes really hard to break out of that loop and do manual fixes
It's not just an erosion of skills; it can also break the whole LLM toolchain flow.
Easy example: Put together some fairly complicated multi-faceted program with an LLM. You'll eventually hit a bug that it needs to be coaxed into fixing. In the middle of this bug-fixing conversation, go ahead and fire up an editor and flip a true/false or change a value.
Half the time it'll go unnoticed. The other half of the time, the LLM will do a git diff and see those values changed. It will then go off on a tangent, auditing the code for whatever method or reason could have autonomously flipped those values.
This creates a dynamic where you not only have to flip the value, but the next prompt to the LLM also has to be "I just flipped Y value.." in order to head off the tangent it (quite rightfully, in most cases) goes on when it sees a mysteriously changed value.
So you either lean in and tell the LLM "flip this value", or you flip the value yourself and then explain. Explaining takes more tokens in most cases, so you generally eat the time and let the LLM sort it.
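For concreteness, the failure mode described above is easy to reproduce in isolation. Here's a toy sketch (the repo, file name, and flag are all made up for illustration, not taken from the commenter's project):

```shell
# Minimal repro of the "mysteriously changed value" situation.
# File name and flag are hypothetical.
git init -q flip-demo && cd flip-demo
git config user.email demo@example.com && git config user.name demo
printf 'DEBUG = true\n' > config.py
git add config.py && git commit -qm 'initial commit'

# Mid-session, the human flips the flag by hand instead of prompting:
printf 'DEBUG = false\n' > config.py

# On the next turn, an agent that runs `git diff` sees a change it
# never made, and may start auditing the code for what flipped it:
git diff
```

The diff shows `-DEBUG = true` / `+DEBUG = false` with no explanation attached, which is exactly the trigger for the audit tangent: from the model's point of view, the working tree changed on its own.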
So yeah, skill erosion, but it's also just a point of technical friction right now, one that'll improve.
This was a great comment. I don't know if it's common knowledge, but this really helped clarify how the shift happens.
I also remember half coding and half prompting a few months back, only to be frustrated when my manual changes started to confuse the LLM. Eventually you either have to make every change through prompting, or be OK with throwing away an existing session and adding back the relevant context in a fresh one.
I'm not yet at the point where I'm comfortable with just vibe coding slop and committing it to source control. I'm always going in and correcting things the LLM does wrong, and it really sucks to have to keep a mental list of all the changes you made, just so you can tell your Eager Electronic Intern that you made them deliberately and that it should not undo them or agonize over them.
> But the reality is, when I'm in a state of 'Tell LLM what to do, verify, repeat', it's sometimes really hard to break out of that loop and do manual fixes.
My experience is rather that I am annoyed by bullshit really fast: if the model does not give me something that is really good, or can't at least easily be told what needs to be done to make it exceptional, I tend to lose my temper really fast and get annoyed with the LLM.
With this in mind, I rather have the feeling that you are simply too tolerant of shitty code.
I have the same problem. I've had lines directly in front of me where I needed to change some trivial thing, and I still prompted the AI to do it. Also, for some tasks the AI is just less error-prone, and vice versa. But it seems the context switch from prompting to coding isn't trivial.
And that’s exactly why I’ve stopped using LLMs entirely.
People who are using them frequently: you’re delusional if you think your brain is not harmed. I won’t go into great detail because I can’t be bothered and I’m sure this post will be downvoted, but I can share my own experience. Ever since I stopped using them, my ability to focus, think hard, hold concepts in my head, and reason about them has increased immensely. Not only that, but I regained the conditioning of my brain to ‘deal with the pain’ that comes with deep thought. All of that gets lost by spending too much time interacting with LLMs.
1. It says it is $8/month, which is not mentioned on the GitHub page, so I had been thinking it was free in addition to being AGPL-3.0; it links to https://snapify.it/ which is where I see the fee.
2. It says "for everyone" but looks like it might be Linux-specific, and it doesn't say anything about which OSes are supported.
IIUC, the fee is just to use their instance, and hosting your own instance is actually free. Also, it looks like the client side of it runs in a browser, so it will support pretty much any OS.
This appears to be mercifully shorter and less intimidating than the must-have bible, "Curtis Roads. The Computer Music Tutorial. MIT Press, Cambridge, MA, 1996".
It says it was originally published by Wiley in 2009, and the rights reverted to the author in 2025, whereupon the author released it on the net for free.
If someone wanted to start making computer music I'm not sure I'd recommend this or Curtis Roads' book as a starting point.
These aren't resources for getting started. They're more like encyclopedias for learning about DSP and tech once you've established the fundamentals of music and sequencing.
If a beginner wants practical knowledge for making records with electronic instruments I'd give them a DAW, teach them to record and sequence, teach them basic music theory, and then point them to something like Ableton's synthesis tutorials that will teach them about oscillators, envelopes, filters, LFOs, and basic sample manipulation.
It's not uncommon to have a regression test for a compiler that is written in its own language (e.g. some C compilers): compile each new version with itself, then use that to compile itself again, then run the result against unit tests or whatever, which should yield the same results as before.
The point being that determinism of a particular form is expected and required in the instances where they do that.
(I'm not arguing for or against that, I'm simply saying I've seen it in real life projects over the years.)
GCC's build process does this. GCC is built 3 separate times: first with the host compiler, then each subsequent stage with the compiler from the previous step. If the outputs of stages 2 and 3 do not match, the build fails.
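The stage comparison at the heart of that bootstrap can be sketched generically. This is a toy stand-in, not the real GCC build: a trivially deterministic "compiler" (here just `tr` upcasing its input) is faked so the compare step is runnable, where the real bootstrap compiles GCC's own sources at each stage and compares the resulting binaries:

```shell
# Toy sketch of the stage-2 vs stage-3 comparison (not the real GCC build).
# fake_compile stands in for actual compilation; it is deterministic by design.
set -e
printf 'int main(void){return 0;}\n' > compiler_source.c

fake_compile() { tr 'a-z' 'A-Z' < "$1" > "$2"; }

fake_compile compiler_source.c stage2.bin   # produced by the stage-1 compiler
fake_compile compiler_source.c stage3.bin   # produced by the stage-2 compiler

# The real bootstrap fails at this point if the two binaries differ:
cmp -s stage2.bin stage3.bin && echo 'stage 2 and 3 match'   # prints "stage 2 and 3 match"
```

If the "compiler" were nondeterministic (embedding timestamps, random symbol ordering, etc.), `cmp` would fail, which is exactly the class of problem reproducible builds aim to eliminate.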
That's front page news, in this era.