There are opportunity costs to consider along with relevance. Suppose you are staying at my place. Are you going to read the manual for my espresso machine in total or are you going to ask me to show you how to use it or make one for you?
In any case, LLMs are not magical forgetfulness machines.
You can use a calculator to avoid learning arithmetic but using a calculator doesn’t necessitate failing to learn arithmetic.
You can ask a question of a professor or fellow student, but failing to read the textbook to answer that question doesn’t necessitate failing to develop a mental model or incorporate the answer into an existing one.
You can ask an LLM a question and blindly use its answer but using an LLM doesn’t necessitate failing to learn.
There’s plenty to learn from using LLMs, including how to interact with an LLM.
However, even outside of using an LLM, the temptation is always to keep the blinders on: do a deep dive for a very specific bug and repeat as needed. That’s the local minimum of effort, and very slowly you do improve as those deep dives occasionally come up again. What keeps it from being a global minimum is that these systems aren’t suddenly going away. It’s not a friend’s espresso machine; it’s now sitting in your metaphorical kitchen.
As soon as you’ve dealt with, say, a CSS bug, the odds of seeing another in the future are dramatically higher. Given those diminishing returns, spending a few hours learning the basics of any system or protocol you encounter is just a useful strategy. If you spend 1% of your time on a strategy that makes you 2% more efficient, that’s a net win.
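The break-even arithmetic here is easy to check. A minimal sketch (the 1% and 2% figures are the hypothetical ones from the comment above, not measured data):

```python
# Sketch of the time-budget argument: invest a fixed fraction of your
# time learning a system's basics, and ask whether the efficiency gain
# on the remaining time pays for the investment.

def net_output(total_hours, invest_fraction, efficiency_gain):
    """Output in baseline-hours: invest_fraction of total_hours goes to
    learning, and the remaining hours become efficiency_gain more
    productive."""
    working_hours = total_hours * (1 - invest_fraction)
    return working_hours * (1 + efficiency_gain)

baseline = net_output(1000, 0.0, 0.0)          # no learning: 1000.0
with_learning = net_output(1000, 0.01, 0.02)   # 1% invested, 2% faster

# 990 working hours * 1.02 = 1009.8 > 1000, so the investment nets out
print(with_learning > baseline)  # True
```

The win holds whenever the gain fraction exceeds roughly the invested fraction, which is the whole point of the 1%-for-2% framing.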
Sometimes learning means understanding, i.e. a deep dive on the domain. Only a few domains are worth that. For the others, it's only about placing landmarks so you can quickly recognize a problem and find the relevant information before solving it. I believe the best use case for LLMs is when you have recognized the problem and know the general shape of the solution, but have no time to wrangle the specifics of the implementation. You can then provide the context and its constraints to guide the LLM's generation, and recognize wrong outputs.
But that's not learning, or even problem solving. It's just a time-saving trick. And one that's not reliable.
And the fact is that there's a lot of information about pretty much anything. But I see people trying to skip the foundations (not glamorous enough, maybe) and go straight for the complicated stuff. And LLMs are good at providing the illusion that this can be the right workflow.
> Only a few domains are worth that. For the others, it's only about placing landmark so you can quickly recognize a problem and find the relevant information before solving it.
Well said. You can only spend years digging into the intricacies of a handful of systems in your lifetime, but there are still real rewards from a few hours here and there.
I would remember the reply from the LLM, and the cross-references back to the particular parts of the RFC it identified as worth focusing time on.
I’d argue that’s a more effective capture of what I would remember anyway.
If I wanted to learn more (in a general sense) I can take the manual away with me and study it, which I can do more effectively on its own terms, in a comfy chair with a beer. But right now I have a problem to solve.
Reading it at some later date means you also spent time with the LLM without having read the RFC. So reading it in the future means it’s going to be useful fewer times and thus less efficient overall.
I.e., LLM then RFC takes more time than RFC then solving the issue.
Only if you assume a priori that you are going to read it anyway, which misses the whole point.
Because you should have read RFC 1331.
Even then, your argument assumes that optimising for total time (including your own learning time) is the goal, rather than solving the business case as a priority (your actual problem). That assumption may not hold when you have a patch to submit. What you solve, and when, is the general case; there’s no single optimum.
You’re assuming your individual tasks perfectly align with what’s best for the organization, which is rarely the case.
Having a less skilled worker is the tradeoff for getting one very specific task accomplished sooner. That might be worth it, especially if you plan to quit soon, but it’s hardly guaranteed.
No, just basic judgement and prioritisation, which are valuable skills for an employee to have. The OP was effective at finding the right information they needed to solve the problem at hand: In about an hour, the OP knew enough about PPP to fix the bug and submit a patch.
Whereas it's been all morning and you're still reading the RFC, and it's the wrong RFC anyway.
You produced a passive-aggressive taunt instead of addressing the argument.
For clarity: nobody was asking about your business decisions, and nobody is intimidated by your story. What your personal opinions about "attitude" are is irrelevant to what's being discussed (LLMs allowing optimal time use in certain cases). Also, unless your boss made the firing decision, you weren't forced to do anything.
You’re still not getting it: not having a boss means I have a very different view of business decisions. Most people have an overly narrow view of tasks IMO and think speed, especially for minor issues, is vastly more important than it is.
> LLMs allowing optimal time use in certain cases
I never said it was slower; I’m asking what the tradeoff is. I’ve had this same basic conversation with multiple people, and after that failed the only real option was to remove them. Ex: if you don’t quite understand why what you wrote seemingly fixes a bug, don’t commit it yet; "seems to work" isn’t a solution.
Could be I’m not explaining very well, but ehh fuck em.