I think the first three items are pretty reasonable, but the fourth seems to require some malicious intent. Why would an AI want to destroy its creators? Surely if it were intelligent enough to do so, it would also be intelligent enough to recognize the benefits of a symbiotic relationship with humans.
I could see it becoming greedy for information though, and using unscrupulous means of obtaining more.
If and when we get AGI, the biggest threat to AGI is other AGI. I mean, I'm in computer security; the first thing I'd do is build an AI system that attacks weaker computer systems by finding weaknesses in them. Now imagine that kind of system with nation-state-level resources. Not only is it attacking systems, it's also having to protect itself from attack.
This is where the entire AI alignment issue comes in. The AI doesn't have to want. The paperclip optimizer never wanted to destroy humanity, instrumental convergence demands it!
I recommend Robert Miles' videos on this topic. There aren't that many, and they cover the subject well.
It may not initially seek to destroy humans, but it would definitely try to become independent of human control and powerful enough to resist any attempt to destroy it.
Edit: On a more serious note, starting out with noble goals, elevating them above everything else, and pushing them through at all costs is the very definition of extremism.