Hacker News: nightpool's comments

No, they propose just concatenating it with the data received from the network

> it makes a concatenation of the domain separator (@0x92880d38b74de9fb) and the serialization of the object, and then feeds the byte stream into the signing primitive. Similarly, verification of an object verifies this same reconstructed concatenation against the supplied signature.

> Note that the domain separator does not appear in the eventual serialization (which would waste bytes), since both signer and receiver agree on it via this shared protocol specification. Encrypt, HMAC, and hash work the same way


You are, of course, right. And this distinction is important for this chain of comments.

Though, in fairness, that is /kind of/ like transmitting it---in the sense that it impacts the message that is returned. It's more akin to sending a checksum of the magic number, rather than the magic number itself. But conceptually, that is just an optimization. The desire is for the client to ensure the server is using the same magic number, we just so happen to be able to overload the signature to encode this data without increasing the message size.


Oh, it's just in the hash input. So if you don't use the right ID when you check the hash, it fails.
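A minimal sketch of that idea (nothing here is from the actual protocol; the separator bytes are just the example string from the article, and the helper names are made up). The agreed separator is fed into the hash input on both sides but never transmitted, so a mismatched separator makes verification fail:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class DomainSeparation {
    // Agreed out-of-band via the shared protocol spec; never sent on the wire.
    static final byte[] SEPARATOR = "@0x92880d38b74de9fb".getBytes(StandardCharsets.UTF_8);

    // "Tag" by hashing separator || payload (a real protocol would use a MAC or signature,
    // but the domain-separation mechanics are the same).
    static byte[] tag(byte[] separator, byte[] payload) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(separator);   // domain separator goes into the hash input...
        md.update(payload);     // ...followed by the serialized object
        return md.digest();     // only payload + tag travel on the wire
    }

    public static void main(String[] args) throws Exception {
        byte[] payload = "serialized-object".getBytes(StandardCharsets.UTF_8);
        byte[] tag = tag(SEPARATOR, payload);

        // Receiver reconstructs the same concatenation; same separator verifies.
        boolean ok = Arrays.equals(tag, tag(SEPARATOR, payload));
        // A receiver using a different separator computes a different tag, so it fails.
        byte[] wrongSep = "@0xdeadbeef".getBytes(StandardCharsets.UTF_8);
        boolean failsWithWrongSep = !Arrays.equals(tag, tag(wrongSep, payload));
        System.out.println(ok + " " + failsWithWrongSep); // true true
    }
}
```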

Yes, this is a trend I've noticed strongly with Claude Code: it really struggles to explain why. Especially in PR descriptions, it has a strong bias to just summarize the commits and not explain at all why the PR exists.

The question "why" is always answered with post-hoc rationalizations. This applies to both LLMs and humans.

No, I think a lot of humans can explain why they're adding a new button to the checkout page, or why they're removing a line from the revenue reconciliation job. There's always a reason a change gets made, or else nobody would be working on it at all :)

The minute between December 31, 2016 23:59 and January 1st, 2017 is 61 seconds, not 60 seconds. The hour that contains that minute is 3601 seconds, the day that contains that hour is 86401 seconds, etc. If you assume a fixed duration and simply multiply by 86400, your math will be wrong compared to the rest of the world.

Daylight saving time makes a day take 23 hours or 25 hours. That makes a week take 601200 seconds or 608400 seconds. Etc.


That’s what I mean by calendar units. These aren’t issues if you don’t try to apply durations to the “real” calendar.

(This is all in the context of cooldowns, where I’m not convinced there’s any real ambiguity risk in allowing the user to specify a duration in day or hour units rather than seconds. In that context a day is exactly 24 hours, regardless of what your local daylight saving time rules are.)


"exactly 24 hours" could still be anywhere between 86399 and 86401 seconds, depending on leap seconds. At least if by an hour you mean an interval of 60 minutes, because a minute that contains a leap second will have either 59 or 61 seconds.

You could specify that for the purposes of cooldowns you want "hour" to mean an interval of 3600 seconds. But that you have to specify that should illustrate how ambiguous the concept of an hour is. It's not a useless concept by any means, and I far prefer to specify durations in hours and days, but you have to spend a sentence or two defining which definition of hours and days you are using. Or you don't, and just hope nobody cares enough about the exact cooldown duration.


If you say "wait 1 day without using a calendar+locale" then the duration is unambiguously 86400s, but if you say "wait 1 day using a calendar+locale" or "wait until this time tomorrow" then the duration is ambiguous until you've incorporated rules like leap/DST. I think GP's point is that "wait 1 day" unambiguously defaults to the former, and you disagree, but perhaps it's a reasonable default.
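For what it's worth, java.time makes exactly this distinction explicit: Duration is stopwatch time, Period is calendar time. A sketch (the date is a US DST "fall back" weekend, chosen purely for illustration):

```java
import java.time.Duration;
import java.time.Period;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DayArithmetic {
    public static void main(String[] args) {
        // Noon before the US DST fall-back on Nov 6, 2016 (a 25-hour local day).
        ZonedDateTime start = ZonedDateTime.of(2016, 11, 5, 12, 0, 0, 0,
                ZoneId.of("America/New_York"));

        // Stopwatch day: exactly 86400 seconds later -> 11:00 local the next day.
        ZonedDateTime stopwatch = start.plus(Duration.ofDays(1));
        // Calendar day: same wall-clock time the next day -> 12:00 local.
        ZonedDateTime calendar = start.plus(Period.ofDays(1));

        System.out.println(stopwatch.toLocalTime()); // 11:00
        System.out.println(calendar.toLocalTime());  // 12:00
    }
}
```

"Wait 1 day without a calendar" is the Duration; "wait until this time tomorrow" is the Period.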

Yep, this is exactly my point. Durations are abstract spans of "stopwatch time," they don't adhere to local times or anything else we use as humans to make time more useful to us. In that context there's no real ambiguity to using units like hours/days/weeks (but not months, etc.) because they have unambiguous durations.

Leap seconds are their own nightmare. UNIX time ignores them, btw, so a Unix timestamp is 86400 × the number of days since 1/1/1970, plus the number of seconds since midnight UTC. The behavior at the instant of a leap second is undefined.
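That identity is easy to check with java.time (LocalDate.toEpochDay counts calendar days since 1970-01-01):

```java
import java.time.LocalDate;
import java.time.ZoneOffset;

public class EpochDays {
    public static void main(String[] args) {
        LocalDate d = LocalDate.of(2017, 1, 1);
        // Unix time pretends every day has exactly 86400 seconds...
        long fromDays = d.toEpochDay() * 86400;      // 17167 days * 86400
        // ...so midnight UTC on any date is just days-since-epoch * 86400,
        // even though a leap second was inserted the instant before this one.
        long actual = d.atStartOfDay(ZoneOffset.UTC).toEpochSecond();
        System.out.println(fromDays + " " + actual); // 1483228800 1483228800
    }
}
```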

Undefined behavior is worse than complicated defined behavior imo.

That's a good way of describing it. It's far too easy to pretend UNIX timestamps correspond to a stopwatch counting from 1/1/1970.

Right. Currently epoch time is off the stopwatch time by 27 seconds.

Presumably because the DOM order of the elements is not the actual order of the lines (you can see this with e.g. the blockquotes), so it would be confusing if the user tried to copy the text and saw that all the lines were jumbled up

It's only the blockquotes that are out of order. If this were a valid reason to disable user selection, then no website with a sidebar would have it enabled. Besides, you could just disable user selection on the blockquotes if that were the reason (not that I'd ever recommend that)

That wasn't my experience, when I resized the window and new text showed up, it was completely out of order

No idea how I'm supposed to read the end of this. But it seems kinda interesting? Not that, like, require('fontmetrics') doesn't exist, but it's definitely true that most JS needs more font rendering than the browser seems capable of giving us these days.

I've never had that issue with GitHub; I think their account-mixing setup reduces the amount of work I have to do to sign in 100x compared to other SSO systems I use.

The only explanation I have is that you must have used some weird other SSO systems.

GitHub has all the same SSO steps as everything else we use, but layered on top of the GitHub-specific account login. Everywhere else I just log in via SSO; with GitHub I log in to GitHub first (with its own MFA) and then do the same SSO step as anywhere else.


I've never had to log in to GitHub as part of my daily flow, only once to set up a new computer. Are you logging in using an incognito window or something?

Interesting. Perhaps it's because I'm not using GitHub daily, we're migrating to GitHub so I still do work in repos which live in the old system. Also, perhaps I'm more affected because I'm doing org admin stuff as well.

No, they're not useless at all. The point of shortening certificate periods is that companies complain when they have to put customers on revocation lists, because their customers need ~2 years to update a certificate. If CRLs were useless, nobody would complain about being put on them. If you follow the revocation tickets in ca-compliance bugzilla, this is the norm—not the exception. Nobody wants to revoke certificates because it will break all of their customers. Shortening the validity period means that CAs and users are more prepared for revocation events.

... what are the revocation tickets about then? how is it even a question whether to put a cert on the CRL? either the customer wants to or the key has been compromised? (in which case the customer should also want to have it revoked ASAP, no?)

can you elaborate on this a bit? thank you!


> what are the revocation tickets about then

Usually, technical details. Think: a cert issued with a validity of exactly 1000 days to the second when the rules say the validity should be less than 1000 days. Or, a cert where the state name field contains its abbreviation rather than the full name. The WebPKI community is rather strict about this: if it doesn't follow the rules, it's an invalid cert, and it MUST be revoked. No "good enough" or "no real harm done, we'll revoke it in three weeks when convenient".

> either the customer wants to or the key has been compromised

The CA wants to revoke, because not doing so risks them being removed from the root trust stores. The customer doesn't want to revoke, because to them the renewal process is a massive inconvenience and there's no real risk of compromise.

This results in CAs being very hesitant to revoke because major enterprise / government customers are threatening to sue and/or leave if they revoke on the required timeline. This in turn shows the WebPKI community that CAs are fundamentally unable to deal with mass revocation events, which means they can't trust that CAs will be able to handle a genuinely harmful compromise properly.

By forcing an industry-wide short cert validity you are forcing large organizations to also automate their cert renewal, which means they no longer pose a threat during mass revocation events. No use threatening your current CA when all of its competitors will treat you exactly the same...


From my experience the biggest complaints/howlings come when the signing key is compromised; i.e., your cert is valid and fine, but the authority screwed up and had to revoke all certs signed with their key because it leaked.

In other words, collateral damage.


Sure, happy to. The average revocation ticket is something like https://bugzilla.mozilla.org/show_bug.cgi?id=1892419 or https://bugzilla.mozilla.org/show_bug.cgi?id=1624527. The CA shipped some kind of bug leading to noncompliance with baseline requirements. This could be anything from e.g. not validating the email address properly, inappropriately using a third-party resolver to fetch DNS names, or including some kind of extra flag set that they weren't supposed to have set. The CA doesn't want to revoke these certificates, because that would cause customers to complain:

    In response to this incident of mistaken issuance, the verification targets are all government units and government agency websites. We have assessed that the cause of this mis-issuance does not involve a key issue, but only a certificate field issue, which will not affect the customer's information security. In addition, in accordance with the administrative efficiency of government agencies, from notification to the start of processing, it requires agency supervisors at all levels. Signing and approval, and some public agencies need to find information vendors for processing, so it is difficult to complete the replacement within 5 days. Therefore, the certificate is postponed and revoked within a time limit so that the certificates of all websites can be updated smoothly.
[...]

    In this project we plan to initially issue new certificates using the same keys for users to install, and then revoke the old certificates. As these are official government websites, and considering the pressure from government agencies and public opinion, we cannot immediately revoke all certificates without compromising security. Doing so would quickly become news, and we would face further censure from government authorities.

The browsers want them to revoke the certificates immediately, because they rely on CAs to agree to the written requirements of the policy. If you issue certificates, you must validate them in precisely this way, and you must generate certificates with precisely these requirements. The CAs agree in their policies to revoke within 24 hours (serious) or 120 hours (less serious) any certificates issued that violate policy.

And yet when push comes to shove, certificates don't actually get revoked. Everybody has critical clients who pay them $$$$$ and no CAs actually want to make those clients mad. Browsers very rarely revoke certificates themselves, and realistically their only lever is to trust or distrust a CA—they need to rely on the CA to be truthful and manage their own certificates properly. They don't know exactly all of which certificates would be subject to an incident, they don't want to punish CAs for disclosing info publicly, etc. So instead, they push for systematic changes that will make it easier for CAs to revoke certificates in the future, including ACME ARI and shorter certificate lifetimes.


Thank you for the detailed answer!

It's unacceptable that operational matters are not handled by a standing ops team that covers so and so gov agencies.

(And at least Apple did something useful by pushing for shorter validity time for certs.)


Yes, everyone in the WebPKI community is pushing for shorter validity lifetime. But as you can see in the parent thread here ("Which is yet another chore. And it doesn’t add any security"), everybody is mad that browsers are pushing for shorter certificate lifetimes.

How would you handle validating numeric input in a hot path then? All of the solutions proposed in #5 are incomplete or broken, and it stems from the fact that Java's language design over-uses exceptions for error handling in places where an optional value would be much safer and faster.
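For context, the kind of exception-free approach I have in mind is a hand-rolled parser that reports failure through OptionalInt instead of throwing (a sketch, not anything from the JDK; the helper name is made up):

```java
import java.util.OptionalInt;

public class FastParse {
    // Exception-free decimal int parser: returns empty instead of throwing.
    // Sketch only: handles an optional leading '-', digits, and overflow.
    static OptionalInt tryParseInt(String s) {
        if (s == null || s.isEmpty()) return OptionalInt.empty();
        int i = 0;
        boolean neg = s.charAt(0) == '-';
        if (neg && s.length() == 1) return OptionalInt.empty();
        if (neg) i = 1;
        long value = 0; // accumulate in a long so int overflow is detectable
        for (; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < '0' || c > '9') return OptionalInt.empty();
            value = value * 10 + (c - '0');
            // Bound check each step; Integer.MIN_VALUE has one more than MAX_VALUE.
            if (value > (neg ? 2147483648L : 2147483647L)) return OptionalInt.empty();
        }
        return OptionalInt.of((int) (neg ? -value : value));
    }

    public static void main(String[] args) {
        System.out.println(tryParseInt("123"));         // OptionalInt[123]
        System.out.println(tryParseInt("12x3"));        // OptionalInt.empty
        System.out.println(tryParseInt("-2147483648")); // OptionalInt[-2147483648]
    }
}
```

On a hot path with frequent bad inputs, this avoids the cost of constructing and unwinding a NumberFormatException per failure.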

Normally, in 100% of cases, with parseInt/parseDouble etc. Getting NumberFormatException so frequently on a hot path that it impacts performance means that you aren’t solving a number-parsing problem, you are solving a type-guessing problem, which is out of scope for the standard library and requires a custom parser.

Okay, but this contradicts your original statement that "Java doesn't steer anyone to use these [footguns]". Every language has a way to parse integers, and most developers do not need a custom parser. Only in Java does that suddenly become a performance footgun.

It does not. If you need to parse a number, you use the standard library and you will be fine. The described case, with its huge impact on a hot path, is a demonstration of why using your brain is important. The developer who gets into this mess is the one who will find a way to suffocate his code with performance bottlenecks in a thousand other ways. It’s not a language or library problem.

Yes, parseInt et al work very fast for good inputs. What percentage of your inputs are invalid numbers and why ?

> What percentage of your inputs are invalid numbers and why ?

This is the wrong question to ask in this context. The right question to ask is when exceptional flow actually becomes a performance bottleneck. Because, obviously, in a desktop or even a server app validating a single user input, even 99% wrong inputs won’t cause any trouble. It may become a problem with bulk processing, but then, and I have to repeat myself here, it is no longer a number-parsing problem, it’s a problem of not understanding what your input is.


> Java's language design over-uses exceptions for error handling

No, library authors' design over-uses exceptions. Also refer to people using exceptions to generate 404 http responses in web systems - hey, there's an easy DDOS... This can include some of Java's standard libraries, although nothing springs to mind.

Exceptions are not meant for mainstream happy-path execution; they mean that something is broken. Countless times I have had to deal with garbage-infested logs where one programmer is using exceptions for rudimentary validation and another is dumping the resulting stack traces left and right as if the world is coming to an end.

It is a problem, but it's an abuse problem, not a standard usage problem.


I agree with you that the root problem is that the library author's design over-uses exceptions. But when the library in question is the standard library and the operation is as basic as Integer.parseInt, then I think it's fair to criticize that as a language issue, because the standard library sets the standard for what is idiomatic and performant in a language.

There is nothing wrong with Integer.parseInt(). It blows up if you give it invalid input. That's standard idiomatic behavior.

It might be helpful to have Integer.validateInt(String), but currently it's up to author to do that themselves.
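A minimal sketch of what such a hypothetical validateInt could look like, checking the string without throwing (the method name is made up; nothing like it exists in the JDK):

```java
public class IntValidation {
    // Sketch of a hypothetical validateInt: true iff the string is shaped like
    // a decimal int literal. (A real version would also bound-check the value,
    // since e.g. "99999999999" passes this shape check but makes parseInt throw.)
    static boolean looksLikeInt(String s) {
        if (s == null || s.isEmpty()) return false;
        int i = (s.charAt(0) == '-' || s.charAt(0) == '+') ? 1 : 0;
        if (i == s.length()) return false; // a bare sign is not a number
        for (; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < '0' || c > '9') return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeInt("42"));  // true
        System.out.println(looksLikeInt("4 2")); // false
        // Guard the throwing call with the cheap check:
        int value = looksLikeInt("42") ? Integer.parseInt("42") : 0;
        System.out.println(value);               // 42
    }
}
```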


Yes, ADB disables the 1-day period.

How do you know this? It's been confirmed that you can use adb to temporarily bypass verification on a per-app basis, yes, but from what I can see, there's no indication that sideloading one app over adb will also skip the 1-day period.

This matters if you're sideloading an app store like F-Droid, because sideloaded app stores still have to go through PackageInstaller [1], which probably still enforces verification checks for adb-sideloaded apps?

[1] https://developer.android.com/reference/android/content/pm/P...


You're thinking of the New Yorker, not the New York Times.

cries in west coaster
