Funnily enough, floods (GCP) and fires (OVH) are two of the three things AWS explicitly mentions in the Well-Architected docs. For a lot of companies, an AZ going down is an annoyance or a bad day, but a whole region going down could be a real continuity risk.
> Each Availability Zone is separated by a meaningful physical distance from other zones to avoid correlated failure scenarios due to environmental hazards like fires, floods, and tornadoes.
> but a whole region going down could be a real continuity risk
Very much so - Australia only got a second region this year, so if your work required data to remain in Australia, you just had to hope that ap-southeast-2 didn't have major issues.
I'm sure there are plenty of other countries with only a single region.
It makes it very easy for me (as someone who comes from a world of physical datacentres) to reason about what an AZ is getting me, and also to understand the benefits of using AWS (not having to think about the details of power routing, blade switch vs top-of-rack vs core switch, storage cabling, blah blah blah).
If I have to think too hard and do too much work on how I lay applications out, I might as well just rent space in a colo.
It's incredibly rare for multiple AZs to go down at once, especially since they are more than a few miles apart from each other.
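The value of that physical separation is easy to see with a back-of-envelope calculation. This is just an illustrative sketch with made-up failure probabilities, and it assumes AZ outages are fully independent, which is exactly what the physical distance between AZs is meant to buy you:

```python
# Rough model: if each AZ suffers an outage in a given year with
# independent probability p, then all n AZs you deploy into go down
# together with probability p**n. Independence is the key assumption;
# correlated hazards (fire, flood, tornado) are what AZ separation
# is designed to rule out.
def all_azs_down(p: float, n: int) -> float:
    return p ** n

# Hypothetical numbers for illustration only:
print(all_azs_down(0.01, 1))  # single AZ
print(all_azs_down(0.01, 2))  # two AZs
print(all_azs_down(0.01, 3))  # three AZs
```

With a notional 1% annual outage rate per AZ, two AZs take the simultaneous-failure probability down to 0.01% and three AZs to 0.0001% — which is why multi-AZ is the default recommendation, and why any correlated failure mode (shared flood plain, shared power feed) matters so much more than the per-AZ rate itself.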