Hacker News | zelphirkalt's comments

If everyone or a majority of people sets these options, then I think issues will simply be discovered later. So if other people run into them first, better for us, because then the issues have a chance of being fixed once our acceptable package/version age is reached.

I actually think it is not too bad a design, because seconds are the SI base unit for time. Putting something like "x days" requires additional parsing steps and therefore complexity in the implementation. Either knowing or calculating how many seconds there are in a day can be expected of anyone touching a project or configuration at this level of detail.
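A minimal sketch of the difference in parsing effort, in Python (the helper names and unit table are made up for illustration):

```python
# Seconds-only config values need nothing beyond int(); unit suffixes
# like "7d" need a lookup table and error handling on top.

def parse_seconds(value: str) -> int:
    """Seconds-only field: a single int() call, no unit table."""
    return int(value)

UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_duration(value: str) -> int:
    """Unit-suffixed field: extra parsing layer and failure modes."""
    if value[-1].isdigit():               # bare number: treat as seconds
        return int(value)
    unit = value[-1]
    if unit not in UNIT_SECONDS:
        raise ValueError(f"unknown unit: {unit!r}")
    return int(value[:-1]) * UNIT_SECONDS[unit]

print(parse_seconds("604800"))   # 604800
print(parse_duration("7d"))      # 604800
```

Both spellings denote the same cooldown; the second form just moves the day-to-seconds arithmetic from the person editing the config into the implementation.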

Seconds are also unambiguous. Depending on your chosen definition, "X days" may or may not be influenced by leap seconds and DST changes.

I doubt anyone cares about an hour more or less in this context. But if you want multiple implementations to agree, talking about seconds on a monotonic timer is a lot simpler.


Could you explain what you mean re: ambiguity? I understand why “calendar units” like months are ambiguous, but minutes, hours, days, and weeks all have fixed durations (which is why APIs like Python’s `timedelta` allow them).

The minute between December 31, 2016 23:59 and January 1st 2017 is 61 seconds, not 60 seconds. The hour that contains that minute is 3601 seconds, the day that contains that hour is 86401 seconds, etc. If you assume a fixed duration and simply multiply by 86400, your math will be wrong compared to the rest of the world.

Daylight saving time makes a day take 23 hours or 25 hours. That makes a week take 601200 seconds or 608400 seconds instead of 604800. Etc.
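You can check those day lengths with Python's stdlib (the function name is made up; assumes Python 3.9+ with timezone data available):

```python
# Measure the real length of a calendar day across a DST transition.
# The dates below are the 2021 Europe/London changeovers.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def day_length(year, month, day, tzname):
    """Seconds between this local midnight and the next one."""
    tz = ZoneInfo(tzname)
    start = datetime(year, month, day, tzinfo=tz)
    end = start + timedelta(days=1)   # wall-clock "midnight tomorrow"
    # Convert to UTC before subtracting: two datetimes sharing the same
    # tzinfo object subtract naively, which would always give 24 h.
    return (end.astimezone(timezone.utc)
            - start.astimezone(timezone.utc)).total_seconds()

print(day_length(2021, 3, 28, "Europe/London"))   # 82800.0 (23 h, spring forward)
print(day_length(2021, 10, 31, "Europe/London"))  # 90000.0 (25 h, fall back)
print(day_length(2021, 6, 1, "Europe/London"))    # 86400.0 (ordinary day)
```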


That’s what I mean by calendar units. These aren’t issues if you don’t try to apply durations to the “real” calendar.

(This is all in the context of cooldowns, where I’m not convinced there’s any real ambiguity risk in allowing the user to specify a duration in day or hour units rather than seconds. In that context a day is exactly 24 hours, regardless of your local daylight saving rules.)


"exactly 24 hours" could still be anywhere between 86399 and 86401 seconds, depending on leap seconds. At least if by an hour you mean an interval of 60 minutes, because a minute that contains a leap second will have either 59 or 61 seconds.

You could specify that for the purposes of cooldowns you want "hour" to mean an interval of 3600 seconds. But that you have to specify that should illustrate how ambiguous the concept of an hour is. It's not a useless concept by any means, and I far prefer to specify durations in hours and days, but you have to spend a sentence or two defining which definition of hours and days you are using. Or you don't, and just hope nobody cares enough about the exact cooldown duration.


If you say "wait 1 day without using a calendar+locale" then the duration is unambiguously 86400s, but if you say "wait 1 day using a calendar+locale" or "wait until this time tomorrow" then the duration is ambiguous until you've incorporated rules like leap/DST. I think GP's point is that "wait 1 day" unambiguously defaults to the former, and you disagree, but perhaps it's a reasonable default.

Yep, this is exactly my point. Durations are abstract spans of "stopwatch time," they don't adhere to local times or anything else we use as humans to make time more useful to us. In that context there's no real ambiguity to using units like hours/days/weeks (but not months, etc.) because they have unambiguous durations.
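The two interpretations can be made concrete with a small Python sketch, using the 2021 UK spring-forward as the example (variable names are made up):

```python
# "One day later" as a calendar step vs. as 86400 stopwatch seconds,
# straddling the Europe/London DST change of 2021-03-28.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/London")
before_dst = datetime(2021, 3, 27, 12, 0, tzinfo=tz)  # noon, day before clocks move

# Calendar step: same wall-clock time tomorrow (aware + timedelta keeps
# the naive fields and recomputes the offset).
calendar_next = before_dst + timedelta(days=1)
print(calendar_next.isoformat())   # 2021-03-28T12:00:00+01:00

# Stopwatch step: exactly 86400 real seconds later, done via UTC.
stopwatch_next = (before_dst.astimezone(timezone.utc)
                  + timedelta(days=1)).astimezone(tz)
print(stopwatch_next.isoformat())  # 2021-03-28T13:00:00+01:00
```

The two results are one wall-clock hour apart, which is exactly the ambiguity the thread is arguing about; for a cooldown, the stopwatch reading is the one you probably want.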

Leap seconds are their own nightmare. UNIX time ignores them, btw, so that the unix epoch is 86400 * number of days since 1/1/1970 + number of seconds since midnight. The behavior at the instant of a leap second is undefined.
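That formula can be checked against the stdlib's own Unix time in a few lines (the function name is made up):

```python
# Unix time computed the "leap-second-blind" way described above:
# whole days since the epoch times 86400, plus seconds since midnight UTC.
from datetime import date, datetime, timezone

def naive_unix_time(dt: datetime) -> int:
    """Unix time as if every day had exactly 86400 seconds."""
    days = (dt.date() - date(1970, 1, 1)).days
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    return days * 86400 + int((dt - midnight).total_seconds())

dt = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc)
print(naive_unix_time(dt))   # 1483228800
print(int(dt.timestamp()))   # 1483228800 -- identical: Unix time ignores leap seconds
```

The two values agree even though a leap second was inserted just before this instant, which is precisely the comment's point.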

Undefined behavior is worse than complicated defined behavior imo.

That's a good way of describing it. It's far too easy to pretend that UNIX timestamps correspond to a stopwatch counting from 1/1/1970.

Right. Currently epoch time is off from stopwatch time by 27 seconds.

In the UK last Sunday was 23 hours long because we switched to BST, and occasionally leap seconds will result in a minute being something other than 60 seconds.

No it wasn't. The country instantaneously changed timezones from UTC+0 to UTC+1 (called something else locally); it was no different from any other timezone change, e.g. physically moving into another timezone.

exploiting the ambiguity in date formats by releasing a package during a leap second

I came here to argue the opposite. Expressing it in seconds takes away questions about time zones and DST.

I think you're incorrect to say that seconds are also ambiguous. Maybe what you mean is that days are more practical, but that seems very much a personal preference.


I understand the [flawed] reasoning behind "x seconds from now is going to be roughly now() + x on this particular system", but how does defining the cooldown from an external timestamp save you from dealing with DST and other time shenanigans? In the end you are comparing two timestamps, and that comparison is erroneous without considering those shenanigans.

I think you misread the comment you're replying to.

> seconds are the SI base unit for time

True. But seconds are not the base unit for package compromises coming to light. The appropriate unit for that is almost certainly days.


That kind of complexity is always worth it. Every single time. It's user time that you're saving, and it also makes the config clearer for readers and cuts down on "too many/too few zeroes by accident" errors.

It's just a library for handling time, in an app that 98% of the time will be using it for something else.


I find it best when I need a calculator to understand security settings. 604800 here we come
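A tiny sketch of pushing that arithmetic into the config layer instead, so nobody has to recognise 604800 on sight (the constant and helper names are made up):

```python
# Let the code do the unit conversion the human would otherwise need
# a calculator for.
DAY = 24 * 60 * 60   # 86400 seconds

def days(n: int) -> int:
    """Express a cooldown of n days in the seconds the config expects."""
    return n * DAY

print(days(7))   # 604800 -- the value from the comment above
```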

The GP's criticism as I read it is about paper authors not making it particularly easy to reproduce their findings.

For a long time I have criticized this too, especially for software projects or papers that deal with machine learning models. If the things described in a paper are not reproducible, then it's basically worthless. Similar to "it works on my machine" in software engineering. Many paper authors are not software engineers, and often they are not experts in the tooling they should be using to make their research reproducible either. If this is a problem for a research team, then please, hire an engineer to ensure reproducibility. It doesn't help anyone to remain ignorant of the reproducibility issue, and it only shows a lack of scientific discipline. Reproducibility should be on the mind of any serious researcher, and there should be lectures at universities about how to achieve it.


Would be good to actually make them pay that bill though.

This hints at something that, in my opinion, isn't discussed enough:

Say some personal data leaked into training data; where can I request surgical deletion of that data from the LLM? Not only license washing is done using LLMs, but also PII washing and consent ignoring. How will a service provider make sure never to have personal data in the training data set, and fix earlier mistakes pertaining to personal data? Are they not obliged to have a way of deleting one's personal data? GDPR or something?


Well, depends on what you have in those private notes and how others will query the LLMs trained on that private data. Maybe you write things in private notes that are a reason for private notes to remain private.

Probably has happened at some point, but personally, I have not been hit with/experienced downtime of Codeberg yet. The other day however GitHub was down again. I have not used Gitlab for a while, and when I used it, it worked fine, and its CI seems saner than Github's to me, but Gitlab is not the most snappy user experience either.

Well, Codeberg doesn't have all the features I did use of Gitlab, but for my own projects I don't really need them either.


That's good, but it can be read as: "Everyone can be a first-time offender and get away with a slap on the wrist." -- where "everyone" is a tech company. Next they will find some other nefarious thing they don't need to check properly for, since that would be a new offense and again only earn a wrist slap. There is no signal in this fine, other than "Hey, it's OK: if you are big enough, you will get away with it. At least once, likely twice or more, depending on how big you are."

One general problem, or challenge, with statically, strongly typed languages is that one can quickly reach a local optimum, but that local optimum might lack some flexibility that is needed later on, discovered only after some usage and after seeing many use cases. Then a big refactoring is ahead, possibly even of the core types of the project. Even when such flexibility is allowed for and thought about, it often happens that expressing it in types becomes quite complex, which, without a lot of care, will impact the user of the project. The user needs to adhere to the same types, and there might then be quite some ceremony around making something of the correct type to use with the project.

It is safer, but it is not without its downsides. It demands a careful design to make something people will enjoy using.


Man, a while ago I thought: "It happens often, alright, but every 2 weeks? Sounds like a slight exaggeration." But it really is every 2 weeks, isn't it? If I imagine anything in production being down every 2 weeks at a previous job ... phew, we would have had to have a few hard talks and course corrections.

I once fixed a site going down several times a year with two t1.micro instances in the same region as the majority of traffic. Instantly solved the problem for what, $20/month?

Another site was constantly getting DDoSed by Russians who were mad we took down their scams on forums; that had to go through Verisign back then, not sure who they're using now. They may have enough aggregate pipe that it doesn't matter at this point.


