Hacker News

Hot take…

Storing anything as UTC was a mistake; we should be using TAI for all storage and computation, only transforming into more human-friendly formats for display to end users. This never needed to be a problem, except we decided to make TAI harder to use than UTC, and so everything got built on top of legacy BIOS-level, hardware-supported UTC-style clock behaviour, when we should have been using TAI from the start. Yes, I know it would have been harder, but we got off our collective asses and fixed our short-sighted decision making for Y2K date storage, so why not this? If it truly costs so much for everyone to endure a leap second, why wasn't it just fixed from the bottom up and rebuilt correctly?



I'm not sure I support your "hot take" — it is hot and requires a lot of contemplation. But that's not the point, IMO.

The point is that both TAI and UTC ALREADY EXIST. TAI is truly monotonic (whatever that means in a relativistic universe) and makes no compromises. UTC abolishes monotonicity in order to keep both the length of a second and the relationship to Earth's rotation. They both work. For whatever reason (for obvious reasons, that is, but it doesn't matter) UTC was chosen to keep time in virtually every software system.

So, OK, if there is a suspicion that leap seconds are unnecessary, how about moving from UTC to TAI then? Let's keep UTC as it is, keep adding leap seconds, and just make it a best practice to rely on TAI for all datetime operations and world clock synchronization. Maybe it will work out, maybe it won't, but at least you won't be breaking a perfectly working alternative (currently the mainstream) system.

The more I think about it, the more outrageously stupid abolishing leap seconds seems.


Eliminating leap seconds was only a half measure. They should have finished the job by adding rockets to the Earth to ensure that its rotation will stay at exactly 24 hours.

If they weren't going to do that, then why eliminate leap seconds? Kicking the problem down the road doesn't really solve the problem, it just makes it worse later.


> its rotation will stay at exactly 24 hours.

The Earth rotates on its axis in 23 hours and 56 minutes.

https://en.m.wikipedia.org/wiki/Sidereal_time


It rotates in 23 hours and 56 minutes relative to the fixed stars. It rotates in 24 hours relative to the Sun. It is Earth's rotation relative to the Sun that people care about in most situations, because day and night depend on that, not on the position relative to some far-away stars.


24 hours is an average over the duration of the Earth's orbit around the sun.


Seems like a problem to be solved with more rockets!


Surely it’s all the test firings of Merlin and Raptor engines that are slowing the Earth down in the first place? I mean, first he screws up Twitter, now it’s time itself.

(/s)


He is absolutely, singlehandedly destroying Twitter-- no sarcasm.


> For whatever reason (for obvious reasons that is, but doesn't matter) UTC was chosen in virtually any software system to keep time.

What was chosen really isn't UTC. Several UTC seconds in the past are not accurately representable in unixtime. Several unixtime seconds in the past are ambiguous as to which UTC second they are.

Unixtime is awfully close to UTC time, but it's not the same. If UTC time stops inserting leap seconds and never has negative leap seconds, then they will be equivalent going forward.


Not really. I mean, it's true that we should distinguish between the two, and everything you said about the difference is also true. But unixtime doesn't really "exist" in the sense UTC and TAI do. It is rather an imperfect implementation of UTC that chooses to ignore (or repeat) some seconds.

You often hear that unixtime is the number of seconds passed since X. But that isn't really true. The number of seconds is the number of seconds; it isn't defined by our standards, it just exists. And TAI is a fair representation of how many seconds on Earth actually passed (on average) since whatever.

UTC kind of does it as well, by virtue of 1 second being equal to 1 TAI second, but it actually counts (counted, until yesterday) the number of Earth's rotations. It's just that every rotation (represented by 24 h) sometimes consists of more than 86,400 seconds.

Unixtime on each individual device counts nothing. It imperfectly represents the UTC timestamp received over NTP. Some pairs of timestamps are represented by the same value. Of course, you can just take your device offline and call whatever number of seconds it has counted since some moment "unixtime", but you know it will drift away from any meaningful "real" time soon enough.

(Also, it's not even entirely fair to say, as you did, that it is unixtime that was chosen by all the software. Many programs store datetimes as strings. Usually, they still don't support "23:59:60" anyway, but that doesn't really make them unixtime. Unixtime is a timestamp encoding.)

So, that's basically what I'm talking about: you can just make unixtime an implementation of TAI (as opposed to UTC). You can build a new calendar format for it, introduce a new name (not UTC!) for it, and see how well it does as the whole world slowly drifts from Earth's rotation to keep up with TAI. Maybe it actually is fine, I'm no judge of it (because it really is complicated and I haven't decided yet if it's a good solution or not). But why the fuck would you destroy UTC for it?! It is the closest usable representation of UT1, which doesn't stop existing! Leave it be!


Consider that the most widely deployed time standard outside the UT framework is GPS time, which was initially synced with UTC but is de facto TAI-19, because it turns out that constant-rate monotonic time is useful and UTC fails at even the alleged astronomical use-case.
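The offsets involved are constant, which is what makes GPS time pleasant to work with. A toy Python sketch (the function names are mine; the constants are the published ones: GPS time = TAI - 19 s since the GPS epoch, and TAI = UTC + 37 s since the 2017 leap second):

```python
# GPS time runs a fixed 19 s behind TAI; since the 2016-12-31 leap
# second, TAI is a fixed 37 s ahead of UTC (until the next leap second).
TAI_MINUS_GPS = 19
TAI_MINUS_UTC = 37  # only constant while no new leap seconds are added

def tai_from_gps(gps_seconds):
    # TAI = GPS + 19, always
    return gps_seconds + TAI_MINUS_GPS

def gps_from_utc_unix(unix_seconds):
    # GPS = UTC + 18 at present; valid only for timestamps after 2017-01-01
    return unix_seconds + (TAI_MINUS_UTC - TAI_MINUS_GPS)
```

Note the asymmetry: the GPS-to-TAI offset is permanent, while the UTC offset changes with every leap second, which is exactly the bookkeeping UTC imposes.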


If you want to keep UTC matching the solar rotation, why do you specifically require leap seconds? Why not leap minutes? Or leap milliseconds? The choice of ±1 s as the acceptable error seems arbitrary.


It is worthwhile to look at the past: before leap seconds, the disagreement with UT1 was handled by changing the rate of UTC slightly, which is probably even worse than leap seconds. And for a while the unit of adjustment was less than a second, varying from 0.05 to 0.2 seconds. I believe enough people complained about those subsecond adjustments to kill them, and leap seconds survived only because not enough people had complained by that point.


Because a minute is a huge amount of time (and, by the way, don't forget that minutes don't really exist here; I just understand that you mean 60 seconds), and there is no such thing as "leap milliseconds", because UTC is literally just counting seconds. Basically, UT1 already is an implementation of what you call "leap milliseconds". Not literally, but it achieves the same thing. And it is way too complicated to use in practice outside of astronomy-specific tasks.

So, to sum it up:

- TAI is a real thing, it has a concrete meaning, and it's "leap infinity".

- UT1 is a real thing, but is unusable in practice, and you could think of it as "leap ms"

- UTC until yesterday was a real thing, meaning a time whose seconds equal TAI seconds but which never drifts from UT1 by more than 0.9 s. As of today it's broken and I'm not sure what it even means anymore; not in practice, I mean, but "platonically".

- Nobody has yet introduced a standard that would mean "time with seconds equal to TAI seconds, but not drifting from UT1 by more than 59 s". I guess you could be the one to do it, but I'm not sure it would see wide adoption.


Why is UT1 unusable? Can we approximate it with something like a polynomial that gets updated periodically?


Pretty sure you are saying the same thing as the post you replied to.

They recommend TAI for storage and computation, and are not against UTC for human consumption.

Although the easier hack: Abolish leap seconds from UTC!


I now realize that the parent meant everything should have been TAI from the beginning, which is indeed a valid take. We can't switch to TAI today only because we are already using UTC.

Original comment: only those who have never tried to actually use TAI can claim that you can use TAI instead of UTC without problems.


Just have 37 leap seconds to bring UTC into sync with TAI, then lock them together.

That won't break anything...


Yes, that's what the CGPM eventually decided to do because they know UTC will have to stay.


As spacecraft start making return trips to Earth, we'll run into similar adjustment problems, because TAI doesn't progress at the same rate at different altitudes and accelerations, so there'll need to be a re-sync at some point. Satellites already have a similar problem, but they can just synchronize directly with Earth, since it's so close. In theory spacecraft could do the same direct synchronization to Earth time, but e.g. a computer on the far side of the Moon would need additional relay(s) to stay in sync.

I'm not sure why we don't define an intergalactic time standard and approximate it everywhere with NTP-like protocols: one monotonic clock at rest (with respect to the CMBR) in free space. The second is weirdly defined and tracked in Earth's gravity well.


Right now there are no people outside Earth's gravity well for significant amounts of time.

As long as the Earth exists and you can communicate with people there, there's no practical reason not to use an Earth-based reference clock.


Most of the accurate timekeeping is needed by computers, not humans. We can always define local human time with an affine transform from a coordinated monotonic clock.


TAI is independent of altitude.

The clocks used to build TAI (it's a coordinated average of dozens of atomic clocks around the world) became sufficiently accurate by the early 70s that the altitude of each clock measurably affected the length of its seconds. As a consequence, it was decided that as of 1 January 1977 00:00:00, TAI would be corrected to correspond to what it should be if measured by clocks at the geoid (mean sea level), and as a result it has no dependence on altitude or acceleration. There is also (because metrologists are like this sometimes) a continually published version of what TAI was before 1 January 1977 00:00:00, but it is now named EAL (Échelle Atomique Libre, meaning Free Atomic Scale).

In addition to this, we have already designed and maintain equivalent time standards to TAI for other reference frames. For the Earth there is Geocentric Coordinate Time (TCG, Temps-coordonnée géocentrique), which is, roughly speaking, TAI for a clock orbiting the Sun where the Earth-Moon barycentre orbits, but without the Earth and Moon's gravitational influence. And for the entire solar system there is Barycentric Coordinate Time (TCB, from the French Temps-coordonnée barycentrique), which is, roughly speaking again, equivalent to a TAI-style clock but this time subtracting the entire solar system, as if a clock keeping TAI were just orbiting the galaxy at the barycentre of the solar system.

The cutting edge of this is building up astronomical data on ultra-stable pulsars to use as "external" reference clocks far outside the solar system, but the complexity of subtracting the effects of everything the pulsars' radiation beams pass through before they reach us makes it quite challenging. The utility for deep-space navigation has made it an actively funded path of research for at least the last decade (to get a GPS equivalent at lunar distance and beyond, where it rapidly becomes impractical to have a GPS-like orbiting constellation due to inverse-square radio broadcast power limits; a good radio can pick up GPS at the Moon, but the location precision out at that distance... is not great).

The cosmic microwave background dipole may indicate that we can use it as an absolute frame of reference, but settling that with enough certainty to base an official time standard on it seems some time away, given the state of things between cosmology, astronomy, astrophysics and metrology.


Is sea-level rise predicted to mess with TAI, then?

I really hope we can get something like TCB standardized for computer use with affine transform to human-usable times.


The issue doesn't go away.

> more human friendly formats for display to end users.

This is what's doing the very heavy lifting in your proposal. Leaving aside that we don't know when future leap seconds will occur (and thus get mismatches when broadcasting to different computers, which may or may not receive the leap-second information at different times), the sheer fact of the matter is that software developers are users too. They will take shortcuts and display TAI as UTC because "something, something, people are lazy or uneducated."

We do not need leap seconds. We never should have implemented them. They are a scar on our software for potentially hundreds of years already for any application that seeks to have high accuracy over time.

Time is very frequently a join key or part of a join key in a database and these small differences mean countless hours wasted to investigate "couple of record" mismatches.

Just stop using leap seconds. We will be fine.


You've made a very good point. All new software/systems I build will use TAI64. As an industry, we should just push this move ourselves.


Libraries are already available. See https://cr.yp.to/time.html and the pages linked from there.
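The core of that format is tiny: a TAI64 label is just 8 big-endian bytes whose integer value is 2^62 plus the count of TAI seconds since the 1970 TAI epoch. A minimal Python sketch (`tai64_label` is a made-up name, not part of any library):

```python
def tai64_label(tai_seconds: int) -> bytes:
    """Encode a count of TAI seconds since 1970 TAI as a TAI64 label:
    8 big-endian bytes with value 2**62 + seconds (per djb's TAI64 spec)."""
    if not (-(2**62) <= tai_seconds < 2**62):
        raise ValueError("out of TAI64 range")
    return (2**62 + tai_seconds).to_bytes(8, "big")

# The epoch itself encodes as the hex value 0x4000000000000000:
print(tai64_label(0).hex())  # 4000000000000000
```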


Yep, and here is one in Erlang, Taider: https://github.com/secYOUre/taider


Arguably, being able to perform calendar calculations without leap-second information (which you don’t have for the future) is more important in applications than maintaining to-the-second accuracy over calendrical timescales.

What applications and date-time libraries should really do is differentiate between timestamps, calendar plus wall-clock time, and elapsed runtime. In most circumstances, only the latter would really need to be consistent with TAI.


You don't have DST offsets for the future either. Those keep changing. That's not a blocker.

The straight-up truth is that we created something in between. Some parts of Earth-Sun alignment are in the time-zone abstraction layer, and leap seconds are in the seconds-count layer. There's no real cause for this, and we should have moved to TAI to fix this blunder long ago.


You don't have DST offsets at all in UTC, which is probably how you are storing timestamps, so you can always compute a delta between timestamps if you don't have leap seconds.


That's the whole point I'm making here. UTC and leap seconds could always have been separate, in the same way timezone+DST offsets are separate. In fact, leap seconds could have just gone into the timezone offsets directly and it'd all work fine (we'd probably still call it UTC+10 rather than UTC+10:00:27, but that's not a problem, and to be honest we'd probably not bother at all until the offset got big anyway).

There's no reason this wasn't done many years ago, except for a blunder on the part of the CGPM, which they are now working to correct.


Well, in principle there’s TAI for that. But civil time in most countries is defined in terms of UTC/GMT.


I agree actually. A lot of the blame is on unixtime trying to roughly align to UTC rather than TAI.

But we're here now and we have to fix this. So anything that moves us away from this problem is helpful.


Absolutely! A large part of the blame here rests on Unix time (which is a monotonic count of elapsed seconds) trying to track a time standard where those seconds are not in fact monotonic. Unix time is basically “TAI done wrong”, and the failure to correct this early on is the root of it. Originally Unix time was not aligned to any particular time standard at all; ideally it would have been aligned to TAI, but in the mid 70s it was decided to align it with the elapsed seconds of UTC as of a fixed date.

This decision just dominoed down through time, causing enough friction that we computer programmers outweighed the metrologists: they caved and abandoned properly keeping UTC in order to stop causing problems for everyone else, all due to our continued failure to fix the root cause of these issues.


What would using TAI solve exactly? I'm unfamiliar.


TAI is a monotonic clock and isn't adjusted for solar time of day. It could be considered universal, in that TAI would be the same between any two points, but UTC is adjusted for the Earth's rotation, so a theoretical Mars UTC would end up out of sync with Earth UTC.

EDIT: the info below is incorrect about UTC not being monotonic, as pointed out in the thread, but it is useful for monotonic vs non-monotonic:

In UTC you can jump forward or back, so it's possible to do an operation after another operation but have it timestamped before it, which is bad for many reasons, auditing being the top one.

do operation one at T0
do operation two at T1
do operation three at T-1

In TAI it would always be:

do operation one at T0
do operation two at T1
do operation three at T2


Link for anyone interested: https://en.wikipedia.org/wiki/International_Atomic_Time

I'm not sure how this differs much in practice from UNIX time


I'm pretty sure UTC is monotonic too; it's unix timestamps that really are the true mess.


Yeah, you are right: when a leap second is introduced it becomes 23:59:60 (monotonic) in UTC, while with unix time a normal way to handle it is to repeat 23:59:59 twice (non-monotonic).


Leap seconds can be negative.


A negative leap second means that 23:59:59 is skipped, you go from 23:59:58 to 00:00:00, which is monotonically increasing, in both UTC and unixtime.

Positive leap seconds are monotonically increasing in UTC, where you get 23:59:60 between 23:59:59 and 00:00:00, but not in typical implementations of unix time, where 23:59:59 repeats, and with milliseconds you go (briefly) 58.999 -> 59.0 -> 59.999 -> 59.0 -> 59.999 -> 0.0.

If you're only counting seconds, then both UTC and unixtime are always monotonically increasing.
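To make the repeat concrete, here is a toy lookup of what a typical POSIX clock reports around the 2016-12-31 leap second (the function name is made up; the numeric values are the real unix times):

```python
def posix_from_utc(label):
    # Around a positive leap second, a typical POSIX clock reuses a value:
    table = {
        "2016-12-31T23:59:59": 1483228799,
        "2016-12-31T23:59:60": 1483228799,  # the leap second repeats the count
        "2017-01-01T00:00:00": 1483228800,
    }
    return table[label]

# Two distinct UTC seconds, one unix number: the ambiguity described above.
assert posix_from_utc("2016-12-31T23:59:59") == posix_from_utc("2016-12-31T23:59:60")
```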


Wrong. The very topic of this discussion, leap seconds, are non-monotonic adjustments to UTC.


UTC is monotonic, even during leap seconds. (A positive leap second adds a 23:59:60; no timestamp ever repeats, the clock never moves backwards.)

(Negative leap seconds are similar, and do not affect the monotonic property.)

POSIX/Unix timestamps, however, are non-monotonic. But that's a different timescale.


In what way are leap seconds non-monotonic in UTC?


Datetime storage would consist of two explicit parts: one free from leap seconds (similar to the raw timestamp you get from a GPS receiver), and a description of when leap seconds happen, so that you can transform the leap-second-free timestamp into UTC. Feels like a more robust approach in principle to me.
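A rough sketch of what that two-part storage could look like (all names are hypothetical; the leap table here holds the atomic counts at which a leap second was inserted):

```python
from dataclasses import dataclass, field

@dataclass
class SplitTimestamp:
    atomic_seconds: int  # leap-free monotonic count (GPS-receiver style)
    leap_table: list = field(default_factory=list)  # counts where a leap second was inserted

    def to_utc_unix(self) -> int:
        # Each inserted leap second leaves UTC one second behind the atomic
        # count, so subtract the leaps at or before this instant.
        leaps = sum(1 for t in self.leap_table if t <= self.atomic_seconds)
        return self.atomic_seconds - leaps
```

The nice property is that the first field alone supports ordering and subtraction; the leap table is only consulted when you need human-facing UTC.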


So like a CRDT for time


CRDT = Conflict-free replicated data type, https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...


Hear hear. Computers should store TAI and convert to human time using leap seconds, time zones and DST offsets. I wonder if we could roll the leap second into the timezone and just remove UTC entirely (or rather make UTC an alias for TAI).


> we should be using TAI for all [...] computation

How do you add a full day if you do not know whether a leap second occurred or not?


The same way you add a full year without knowing if a country will change timezones or vote to drop daylight savings: you distinguish between timestamps and calendar dates, and keep your timezone/leap-second database updated.
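The same distinction already exists for DST, and it's easy to demonstrate. A small Python sketch using the US spring-forward of 2021-03-14 (offsets hard-coded to avoid needing a tz database):

```python
from datetime import datetime, timedelta, timezone

EST = timezone(timedelta(hours=-5))  # before the spring-forward
EDT = timezone(timedelta(hours=-4))  # after it

# Noon on the day before US DST started in 2021.
t = datetime(2021, 3, 13, 12, 0, tzinfo=EST)

# Chronological "one day later": exactly 86,400 elapsed seconds.
chrono = (t + timedelta(seconds=86400)).astimezone(EDT)

# Calendrical "one day later": same wall-clock time on the next date.
calend = datetime(2021, 3, 14, 12, 0, tzinfo=EDT)

print(chrono.time())  # 13:00:00 on the wall clock, because that day was 23 hours long
print(calend.time())  # 12:00:00
```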


That doesn't answer my question. What I mean is that "a day" is not a fixed length, but depends on the existence of leap seconds. Most days have 24*60*60 = 86,400 seconds, but not all.

Say I have a timestamp in the future, and I want to add a day to it. Then the first approximation would be to add 24*60*60 seconds. However, if a leap second occurs in that day, the correct thing is to add one more second. But I do not necessarily know today if there will be a leap second. So I cannot compute the correct timestamp.


You have to differentiate between calendrical and chronological calculations.

So to walk through your requested example…

You store a TAI timestamp of when you originally chose a specific date/time, and then you store either a chronological timestamp (the TAI value) or a calendrical timestamp, which is a struct/dict/etc. storing the desired year, month, day, time and timezone, and notionally the calendar system, unless you want to assume the Gregorian calendar and risk confusion with Julian-calendar dates, which are still in use for astronomical record keeping. With these two values you can calculate what to change in the struct to account for any time change; a day not being a fixed number of seconds is what allows accurate calendar calculations. You increase the day value and make sure you don’t have to “carry” a month in the months column.

Yes, this is more calculation when modifying a value compared to just adding a bunch of seconds, but that’s the issue at the heart of this problem: calendars and chronological elapsed seconds are fundamentally different, and we computer programmers tried to take a big shortcut by using a single seconds value for both… and thus, here we are with the present state of affairs.
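For the chronological half of that split, the leap-second bookkeeping is a table lookup. A toy sketch over the 2016-12-31 leap second (the table is deliberately partial; a real one would hold all 27 entries):

```python
# Unix times of the UTC midnights *after* each positive leap second
# (partial, illustrative table: just the 2017-01-01 entry).
LEAP_INSERTIONS = [1483228800]

def elapsed_tai_seconds(unix_start, unix_end):
    # True elapsed (TAI) seconds = unix delta + positive leaps in the interval.
    leaps = sum(1 for t in LEAP_INSERTIONS if unix_start < t <= unix_end)
    return (unix_end - unix_start) + leaps

start = 1483185600   # 2016-12-31 12:00:00 UTC
end = start + 86400  # "one day later" by naive unix arithmetic
print(elapsed_tai_seconds(start, end))  # 86401: that day really had 86,401 seconds
```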


> Storing anything as UTC was a mistake and we should be using TAI

I had to look up TAI. I disagree. UTC exists for a reason. But I am here to tell a war story.

Before I arrived on the scene a customer said they wanted local time in the reports. (As in Wall Clock Time).

The Customer is Always Right. OK?

So the times went into the database as wall clock time.

The designers of the data schema made a simplifying decision to store everything as text.

No time zone went into the text string that described the time. (I do not know why; it would have been so easy.)

I come along and have to code animations using that time series data.

Very few problems...

As you would expect there are a lot of other problems with that database.



