You're unlikely to have a std::vector containing 2 billion single-byte elements, however.
Even being conservative, say you had a struct of size 24 bytes (e.g. three doubles x, y and z): now you're up to 48G.
So yes, there are very few situations where you'll have meaningful data that would fill a vector indexed by signed ints without running out of memory first.
Can you imagine how stupid it would feel if you had to abandon all your std::vector-using code the moment you need to deal with an array bigger than 2 billion elements?
Actually, thinking about it a bit more, the only thing you could store in a std::vector indexed by signed ints where you would have a problem would be single bytes.
If you have 2 billion 2-byte elements then that's 4 GB, which is the total addressable space on a 32-bit system, leaving no room for program code or any other data, meaning you couldn't run anything.
You could go 64-bit, but then a 64-bit signed index (e.g. std::int64_t) goes up to (2^63)-1 and you have the same problem: elements more than a single byte in size will cause you to run out of addressable memory before you exhaust the available signed indices.
Yes, and like a vector containing > 2 billion single-byte elements, it's still delving into a very peculiar use case, compared to the majority of other uses of std::vector, which will never be able to overflow a signed int.
I get that, from a logical point of view, size() can never be < 0, so it makes sense to use an unsigned type. From a practical point of view, though, it also makes programs more susceptible to a number of pernicious bugs that can catch out unsuspecting programmers, e.g. things like
for ( size_t i = v.size() - 1; i >= 0; --i )
{
...
}
Which would work fine with a signed index, but here i is a size_t and can never go below zero: the condition i >= 0 is always true, the decrement past zero wraps around to a huge value, and the loop runs forever.