It could, but that would make it less useful for everyone else. Pushing back against what the user wants is generally not a desirable feature in cases where the user is sane.
It may be helpful to re-read the topic being discussed. This guy was talking to ChatGPT about how he was the first user who unlocked ChatGPT's true consciousness. He then asked ChatGPT if his mother's printer was a motion sensor spying on him. ChatGPT agreed enthusiastically with all of this.
There should be a way to recognize very implausible inputs from the user and rein this in rather than boost it.
There's certainly a way to do this, poorly. But it's not realistic to expect an AI to diagnose users with mental illnesses on the fly and not screw that up repeatedly (with false positives, false negatives, and plenty of other, more bizarre failure modes that don't neatly fit into either category).
I just don't think it's a good idea to legally mandate that companies implement features we don't currently have the technology to implement well.
Pushing back when the user is wrong is a very desirable feature, whatever the mental health of the user. I can't think of any scenario in which it's better for an LLM to incorrectly tell the user they're right instead of pushing back.