Every prompt you put into somebody else's LLM goes into the training set for the next iteration of that LLM, with the explicit purpose of replacing you as a cognitively, and therefore economically, relevant entity. The only dignified move is not to play, though it's a very difficult choice. It's probably not a winning move either, but at this point there are no obvious winning moves -- you and I and all our loved ones will be obsoleted and replaced by tech within the next few years. Concretely, not playing means ceasing to feed the machine data, i.e. disconnecting from the digital world -- and given how digitalized society is becoming, possibly from modern society altogether. Godspeed.
I like to think about it from the perspective of the far future, looking back on me as a historical actor. I have no idea what will happen exactly, of course, but I can't imagine a moral/social crisis of the past where "cross your fingers and hope it goes away" is a move I'd approve of...
That said, your worry is one I definitely share. I guess I just hope more people think of ways they can try to ride/shape this wave, rather than stop/weather it.
I think all technological revolutions have caused similar transformations, obsoleting certain types of activities and pushing novel ones to the forefront.
Not playing is certainly possible but could be a losing strategy as well.
What is your job? Chances are, it's nothing an AGI (built on LLMs) couldn't do, and such an AGI is possible today -- people are building these things right now; check out GitHub. And if you don't believe GPT-4 can do your job more cheaply than you can, just wait for GPT-N, which will.