
Just changing time_t to an unsigned int would take us all the way out to 2106.

This has seemed to be an unpopular observation, in the past.



Lots of code would break because they assume they can do signed math with time_t, though. It's a less invasive change to make it wider: largely, it's just a recompile, except for code that persists a time_t directly or sends it directly over the network (and both of these should be considered harmful for other reasons).

The other problem is that more unsafe code in libraries, etc, will happily cooperate with 2038-safe unsigned time_t code, but will start to do bad things shortly before 2038.
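On the aside that persisting a time_t directly or sending it over the wire is harmful: a minimal sketch of the usual fix, a fixed-width big-endian encoding (the helper names here are hypothetical, not an established API):

```c
#include <stdint.h>

/* Hypothetical helpers: serialize a timestamp as a fixed-width,
   big-endian int64 rather than memcpy'ing a raw time_t, so the
   on-disk/wire format is independent of the platform's time_t
   width and byte order. */
void encode_time(int64_t t, uint8_t out[8])
{
    for (int i = 0; i < 8; i++)
        out[i] = (uint8_t)((uint64_t)t >> (56 - 8 * i));
}

int64_t decode_time(const uint8_t in[8])
{
    uint64_t u = 0;   /* unsigned accumulator: no shifting into the sign bit */
    for (int i = 0; i < 8; i++)
        u = (u << 8) | in[i];
    return (int64_t)u;
}
```

The format stays stable no matter how the platform defines time_t, and pre-1970 (negative) values round-trip too.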


> it's just a recompile, except for code that persists a time_t directly or sends it directly over the network

Wouldn’t it also fail for all code that

a) stores the return value in a plain int and not a time_t as it would truncate (but at least this is a warning)

b) has time_t inside structs and allocates them (or arrays of them) with sizes that assume time_t is 4 bytes

c) has time_t inside structs and assumes the byte alignment of fields following it

Etc etc..


A) will be fine until 2038. I assume we have a lot of things that mash time into an int or long. But such code is no worse off by virtue of the C library types being fixed.

B) -- manual calculation of struct sizes instead of sizeof() -- yah, maybe it'll happen. I don't see much code this bad. If it's ever been compiled on a platform with a different word length, it's already been fixed.

C) Perhaps. For the most part alignment improves when you have a wider time_t, but you could have people counting the number of 32 bit fields and then needing 16 byte alignment for SSE later. Again, for the most part this penalty has been paid by code compiling on amd64, etc.
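For what it's worth, (a)-(c) can all be seen in a few lines with toy structs (names hypothetical; the exact sizes and offsets are ABI-dependent, though every common ABI shows the shift):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy structs: widening the time field changes both sizeof and the
   offset of every member after it, because int64_t typically carries
   8-byte alignment. */
struct rec32 { int32_t stamp; char tag; };   /* 32-bit time field */
struct rec64 { int64_t stamp; char tag; };   /* 64-bit time field */

/* (a) Truncation: a post-2038 epoch value no longer round-trips
   through a plain 32-bit int. */
int fits_in_int32(int64_t t)
{
    return (int64_t)(int32_t)t == t;
}
```

Here `sizeof(struct rec64)` and `offsetof(struct rec64, tag)` both grow relative to `rec32`, which is exactly the (b)/(c) breakage if anything hard-coded the old numbers.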


All of these fall under “harmful for other reasons”.


> Lots of code would break because they assume they can do signed math with time_t, though.

And lots of code would break because they want to talk about pre-1970 dates.

I'm definitely guilty of it, as I routinely use the timeline of the Great Emu War (1932-10-02/1932-12-10) to test time-bound features.


Honestly, for network code it makes more sense to add the received time to the current era when the data comes in. A 64-bit timestamp is an extra 4 bytes of overhead. However, the biggest issue with a network protocol is that you just can't force everyone to update everything.

Now, if it's some in-house protocol, sure, you can just update everything instead of making the timestamp a value relative to the current era.

For code running locally, 64 bits is less of an issue; it's mainly a problem of ABI breaks.

Honestly, one thing I think people overlook is file formats... A lot of them have 32-bit time fields. Also, unlike a network packet, the file could actually be from a previous 32-bit era. So those timestamps are ambiguous after 2038.
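A hedged sketch of the "relative to the current era" idea — `widen_stamp` is a hypothetical helper, and it can only guess: a file written more than ~68 years from the reference time still comes out wrong:

```c
#include <stdint.h>
#include <stdlib.h>

#define ERA INT64_C(0x100000000)   /* 2^32 seconds per 32-bit era */

/* Hypothetical helper: lift a stored 32-bit timestamp into 64 bits
   by picking the era that places it closest to a trusted 64-bit
   reference time (e.g. the current clock). */
int64_t widen_stamp(uint32_t stored, int64_t reference)
{
    int64_t base = (reference / ERA) * ERA;  /* start of reference's era */
    int64_t best = base + (int64_t)stored;
    int64_t lo = best - ERA, hi = best + ERA;
    if (llabs(lo - reference) < llabs(best - reference)) best = lo;
    if (llabs(hi - reference) < llabs(best - reference)) best = hi;
    return best;
}
```

For example, a small stored value seen shortly after a 2106-style rollover resolves into the new era, while a near-maximum stored value resolves into the old one.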


Half measures just create even more headaches down the road. Migrating to 64 bit time_t basically solves the problem once and for all. If you're going to make a change, make it the last change you'll ever need.

I'm also in favour of adopting IPv6 ASAP, but so far that has been a much harder sell.


Optimistically, assume we as a species manage to survive to the point where distance, or relativistic speed differences, cause sufficiently frequent changes in the observed passage of time that a single number, of any size, is no longer sufficient.

time_t is sufficient within bounds. It is expedient and quite correct in many computer science use cases. It can be extended with small additions for many other use cases.

However those bounds are a set of assumptions and simplifications that shouldn't be forgotten. I agree that the problem would be solved until the next paradigm shift in our understanding of time and the universe, and maybe forever if it turns out that the rules are cruel or we're too stupid to reach a more complex situation. I just wouldn't say once and for all, there's far too much uncertainty there.


2^64 seconds is 584.9 billion years. I think it's pretty safe to kick the can nearly 600 billion years down the road.


It's signed so only 2^63-1s, or 292 billion years.

'bout the same as tomorrow really, better do nothing, for in the end all is dust.
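Back-of-envelope check of those figures, assuming flat 365-day years:

```c
#include <stdint.h>

/* The 584.9 / 292 billion-year figures assume a flat 365-day year
   (365 * 86400 = 31,536,000 seconds). */
double years_u64(void) { return (double)UINT64_MAX / (365.0 * 86400.0); }
double years_i63(void) { return (double)INT64_MAX  / (365.0 * 86400.0); }
```

`years_u64()` lands near 584.9 billion and `years_i63()` near 292.5 billion, matching the numbers upthread.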


In retrospect we should have made it unsigned, but set the epoch at the Big Bang.


2^63 is just 292 billion years. Beware.


~1/4th to 1/40th the length of time where star formation will still be possible. Woefully insufficient.


Ruler of great size, less useful when measured aspects are a pile of disconnected threads rather than a canvas that is mostly shared and mostly distorted the same way.


Distance won't be a problem. 2^64 seconds takes us past the point where the expansion of the universe is such that anything you are not gravitationally bound to is outside your cosmological horizon.

You'll be in a much bigger universe, but it will be empty except for your local galaxy group.


OpenVMS took this approach for their POSIX libs: https://www.zx.net.nz/mirror/h71000.www7.hp.com/2038.html


The tradeoff there is that you would be unable to use time_t to express times before 1 Jan 1970 (iinm). That may or may not be important depending on use case.


Another tradeoff is that you can't just subtract two time_t's and get a negative value for an interval. I'd wager this is a far more common problem.

E.g. you can't do...

    time_t completion = time(NULL) + 30;
    time_t remaining;

    do {
        remaining = completion - time(NULL);

        /* ... */
    } while (remaining > 0);
without risking a sporadic infinite loop. (With an unsigned time_t, the subtraction wraps to a huge positive value instead of going negative once time(NULL) passes completion, so the loop never exits.)


We can convert that unsigned 32-bit number into a 64-bit number:

    #include <stdint.h>
    #include <time.h>

    int64_t completion = (int64_t)time(NULL) + 30;
    int64_t remaining;

    do {
        remaining = completion - (int64_t)time(NULL);

        /* ... */
    } while (remaining > 0);
There's some slowdown doing 64-bit int math on a 32-bit system, but the above works.


Yes. We're talking about growing time_t from int32_t to int64_t, instead of to uint32_t. If you change it to uint32_t behind the scenes, some code will compile OK but silently misbehave, because it was not expecting unsigned math.


That’s still a breaking change, although somewhat less of a break. Better to just do it right.



