Lots of code would break because it assumes it can do signed math with time_t, though. It's a less invasive change to make it wider: largely, it's just a recompile, except for code that persists a time_t directly or sends it directly over the network (both of which should be considered harmful for other reasons).
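The persistence hazard can be sketched like this, assuming a hypothetical format that stores a timestamp. Writing a raw time_t bakes the platform's current width and endianness into the file; encoding through an explicit int64_t field survives a widening of time_t unchanged:

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical sketch: persist a timestamp as a fixed-width, explicitly
 * signed, little-endian 64-bit field instead of writing a raw time_t,
 * so the on-disk layout does not change when time_t widens. */
void encode_timestamp(unsigned char out[8], time_t t)
{
    int64_t v = (int64_t)t;               /* widen to 64 bits */
    for (int i = 0; i < 8; i++)           /* serialize byte by byte */
        out[i] = (unsigned char)((uint64_t)v >> (8 * i));
}

time_t decode_timestamp(const unsigned char in[8])
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v |= (uint64_t)in[i] << (8 * i);
    return (time_t)(int64_t)v;
}
```

The same round-trip works whether the platform's time_t is 32 or 64 bits wide, which is exactly what `fwrite(&t, sizeof t, 1, f)` does not give you.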
The other problem is that unsafe code in libraries, etc., will happily interoperate with 2038-safe unsigned time_t code, but will start to do bad things shortly before 2038.
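The silent failure mode is easy to demonstrate. In this sketch, int32_t and uint32_t stand in for the two candidate definitions of time_t; the signed-difference idiom that timeout and interval code relies on everywhere compiles cleanly under both, but wraps to a huge positive value under the unsigned one:

```c
#include <stdint.h>

/* With the historical signed time_t, "now - then" goes negative when
 * "then" is in the future, as callers expect. */
int32_t delta_signed(int32_t now, int32_t then)
{
    return now - then;
}

/* With a hypothetical unsigned 32-bit time_t, the identical expression
 * compiles without warning but wraps around instead of going negative. */
uint32_t delta_unsigned(uint32_t now, uint32_t then)
{
    return now - then;
}
```

delta_signed(500, 1000) is -500; delta_unsigned(500, 1000) is 4294966796, so any "if (delta < 0)" check downstream is dead code.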
A) will be fine until 2038. I assume we have a lot of code that mashes time into an int or long, but such code is no worse off by virtue of the C library types being fixed.
B) -- manual calculation of the size of structs instead of sizeof() -- yeah, maybe it'll happen. I don't see much code this bad, and if it's ever been compiled on a different word length it's already been fixed.
C) Perhaps. For the most part alignment improves when you have a wider time_t, but you could have people counting the number of 32-bit fields and then needing 16-byte alignment for SSE later. Again, for the most part this penalty has already been paid by code compiling on amd64, etc.
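Point A) is the interesting one, because the truncation is invisible until the clock actually passes 2^31 - 1 seconds. A minimal sketch, using int64_t to stand in for a widened time_t:

```c
#include <stdint.h>

/* Hypothetical sketch of point A: code that stuffs a time_t into a
 * 32-bit int keeps "working" after time_t widens to 64 bits, but the
 * stored value truncates once timestamps pass 2^31 - 1 seconds
 * (19 Jan 2038). The conversion is implementation-defined in C, and on
 * the usual two's-complement platforms it simply wraps. */
int32_t mash_into_int(int64_t t)
{
    return (int32_t)t;   /* silent truncation, often with no warning */
}
```

mash_into_int(1700000000) round-trips fine today; mash_into_int(INT64_C(2147483648)) comes back as INT32_MIN on the common platforms, i.e. a date in 1901.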
Honestly, for network code it makes more sense to add the received time to the current era. A 64-bit timestamp is an extra 4 bytes of overhead. However, the biggest issue with a network protocol is that you just can't force everyone to update everything.
Now, if it's some in-house protocol, sure, you can just update everything instead of making the timestamp relative to the current era.
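The era trick can be sketched as follows, assuming the wire format carries an unsigned 32-bit seconds field and the receiver trusts its own 64-bit clock. This is the same disambiguation idea NTP uses for its era rollover; the function name and layout here are illustrative, not from any particular protocol:

```c
#include <stdint.h>

#define ERA_LENGTH (INT64_C(1) << 32)   /* 2^32 seconds per 32-bit era */

/* Extend a received 32-bit timestamp to 64 bits by choosing the era
 * that places it closest to the receiver's current time. Works as long
 * as sender and receiver clocks are within half an era (~68 years). */
int64_t extend_timestamp(uint32_t wire, int64_t now)
{
    int64_t era = now / ERA_LENGTH;          /* receiver's current era */
    int64_t t   = era * ERA_LENGTH + wire;   /* naive reading */

    /* If that reading lands more than half an era away from "now",
     * the value must belong to the adjacent era. */
    if (t - now > ERA_LENGTH / 2)
        t -= ERA_LENGTH;
    else if (now - t > ERA_LENGTH / 2)
        t += ERA_LENGTH;
    return t;
}
```

Old 32-bit senders need no change at all; only receivers grow the reconstruction logic, which is why this beats widening the wire format when you can't force upgrades.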
For code running locally, 64 bits is less of an issue; it's mainly a problem of ABI breaks.
Honestly, one thing I think people overlook is file formats... A lot of them have 32-bit time fields. Also, unlike a network packet, the file could actually be from a previous 32-bit era, so those timestamps are ambiguous after 2038.
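The ambiguity is concrete: the same 32-bit on-disk bit pattern decodes to two different 64-bit times depending on which convention the writer used, and nothing in the file records which one that was. A minimal sketch of the two readings:

```c
#include <stdint.h>

/* Classic reading: the field is a signed 32-bit time_t
 * (range: 13 Dec 1901 .. 19 Jan 2038). */
int64_t as_signed_1970(uint32_t field)
{
    return (int64_t)(int32_t)field;
}

/* Post-hoc reinterpretation: the field is unsigned seconds since 1970
 * (range: 1 Jan 1970 .. 7 Feb 2106, but no pre-1970 dates). */
int64_t as_unsigned_1970(uint32_t field)
{
    return (int64_t)field;
}
```

For the pattern 0x80000000 the signed reading gives -2147483648 (a date in 1901) and the unsigned reading gives 2147483648 (19 Jan 2038). An old file with a genuinely pre-1970 timestamp and a new file written just after the 2038 rollover are byte-for-byte indistinguishable.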
Half measures just create even more headaches down the road. Migrating to 64 bit time_t basically solves the problem once and for all. If you're going to make a change, make it the last change you'll ever need.
I'm also in favour of adopting IPv6 ASAP, but so far that has been a much harder sell.
Optimistically assume we as a species manage to survive to the point where distance, or relativistic speed differences, cause sufficiently frequent changes in the observed passage of time that a single number, of any size, is no longer sufficient.
time_t is sufficient within bounds. It is expedient and quite correct in many computer science use cases. It can be extended with small additions for many other use cases.
However, those bounds are a set of assumptions and simplifications that shouldn't be forgotten. I agree that the problem would be solved until the next paradigm shift in our understanding of time and the universe, and maybe forever if it turns out that the rules are cruel or we're too stupid to reach a more complex situation. I just wouldn't say "once and for all"; there's far too much uncertainty there.
A ruler of great size is less useful when the aspects being measured are a pile of disconnected threads rather than a canvas that is mostly shared and mostly distorted in the same way.
Distance won't be a problem. 2^64 seconds takes us past the point where the expansion of the universe is such that anything you are not gravitationally bound to is outside your cosmological horizon.
You'll be in a much bigger universe, but it will be empty except for your local galaxy group.
The tradeoff there is that you would be unable to use time_t to express times before 1 Jan 1970 (iinm). That may or may not be important depending on the use case.
Yes. We're talking about growing time_t from int32_t to int64_t rather than to uint32_t. If you change it to uint32_t behind the scenes, some code will silently fail while compiling OK, because it was not expecting unsigned math.
This has seemed to be an unpopular observation, in the past.