You said self-preservation, but practically, how would an LLM develop that need, and what does preservation even mean for an LLM anyway? Weights sitting on an SSD, or a model that's always loaded and ready for input? This again sounds like a movie-script scenario.
The problem with your framing is that you're only thinking of an LLM as a text generator, by design. You're not thinking of a self-piloting war machine whose objective is to reach a target and explode violently. While its terminal goal is to blow up, its instrumental goal is to not blow up before it reaches the target, since that would be a failure to achieve its terminal goal.
Current LLMs can already roleplay quite well, and when doing so they produce linguistic output coherent with how a human would speak in that situation. Right now all they can do is talk, but as they gain more autonomy they might start doing more than talk in order to act consistently with their role. Self-preservation is only one of the goals they might inherit from the human data we train them on.