Not yet but it seems like they've finally started working towards that. Driver compatibility has improved dramatically over the last few years, for one.
> Any "verification" means unacceptable privacy violations.
So I'm not necessarily arguing for age controls here, but purely on a technical level what do you think of schemes like Verifiable Credentials, which delegate verification to third parties that have already established your identity?
In theory you can set up a system that works like this:
1. User goes to restricted site and sets up an account
2. Site forwards them on to a verification service with a request "IsOver18?"
3. User selects their bank from a dropdown on the broker site
4. Broker forwards them to the bank, with a request "IsOver18?"
5. User logs in and selects "Sure, prove I am over 18 to this request"
6. Bank sends a signed response to the broker "Yep"
7. Broker verifies and sends its own signed response to the site "Yep"
8. The site tags the account as "Over 18 Status verified"
In this situation, the restricted site doesn't get anything other than a boolean answer from the broker. The broker can link a request to a given bank but doesn't get anything that gives away your identity. The bank knows your identity and that it has approved a request, but not necessarily where the request came from.
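The eight steps above can be sketched in code. This is a minimal, hypothetical illustration (all names invented), using HMAC purely to keep it self-contained; a real Verifiable Credentials deployment would use asymmetric signatures (e.g. Ed25519) so the site only needs the broker's public key, and the broker only needs the bank's.

```python
import hmac, hashlib, secrets

# Illustrative keys. In reality these would be asymmetric key pairs, with
# only the public halves distributed.
BANK_KEY = secrets.token_bytes(32)    # known to bank and broker
BROKER_KEY = secrets.token_bytes(32)  # known to broker and site

def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def bank_attest(request_id: str) -> bytes:
    # Step 6: the bank asserts only the predicate, never the user's identity.
    return sign(BANK_KEY, f"{request_id}:IsOver18=true".encode())

def broker_relay(request_id: str, bank_sig: bytes) -> bytes:
    # Step 7: the broker checks the bank's attestation, then issues its own.
    expected = sign(BANK_KEY, f"{request_id}:IsOver18=true".encode())
    if not hmac.compare_digest(bank_sig, expected):
        raise ValueError("bank attestation invalid")
    return sign(BROKER_KEY, f"{request_id}:IsOver18=true".encode())

def site_verify(request_id: str, broker_sig: bytes) -> bool:
    # Step 8: the site sees only a signed boolean, nothing about who you are.
    expected = sign(BROKER_KEY, f"{request_id}:IsOver18=true".encode())
    return hmac.compare_digest(broker_sig, expected)

request_id = secrets.token_hex(8)  # binds the whole chain to one request
ok = site_verify(request_id, broker_relay(request_id, bank_attest(request_id)))
```

The `request_id` is the only thing the site, broker, and bank all see, which is what keeps the site from learning your identity and the bank from learning the site.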
Verification broker tracks sites which make requests and records it attached to personal data. The broker then either sells or leaks that personal data, along with the history of all sites visited which require age verification.
Also your solution requires a bank account, not something everyone has. Many do, but not all. Also the bank may not know "which" site you are visiting, but it does now know you are visiting sites which require age verification and how often.
> Verification broker tracks sites which make requests and records it attached to personal data.
How? What personal data?
The broker doesn't get anything other than "Site X wants to verify over 18, the user selected forward to Bank Y" and "Bank Y responds with TRUE"
> Also your solution requires a bank account, not something everyone has
True. Banks are only one example of an already trusted identity provider in this situation. But I get that there are gaps.
> Also the bank may not know "which" site you are visiting, but it does now know you are visiting sites which require age verification and how often.
Verification need only happen once per site, when setting up an account. This does introduce the possibility of a secondary market for approved accounts though, sure.
User installs a browser extension which forwards the request to everyoneisover18.com; the owner of that site has a script set up to log into their bank and pass the verification challenge.
Restricted-site.com gets the signed response from the broker, not the bank. In your situation there's not any need for "everyoneisover18.com" to defer to a real bank for a faked response as it signs things itself.
But restricted-site.com doesn't trust everyoneisover18.com's key, it only trusts realbroker.com's key, so the response isn't accepted. If it is found to trust fake brokers like that it gets in trouble with the law.
That's why everyoneisover18.com forwards the request to my bank or my broker and gets my signature on behalf of literally anyone. I may charge them $5 for this service.
> That's why everyoneisover18.com forwards the request to my bank or my broker
Doesn't work. The response won't be signed by real-broker.com.
The permission request/response itself goes direct from the server at restricted-site.com to the server at real-broker.com over TLS, so you can't MITM it, it's not controlled by the client and you won't be able to just pass out a cached response.
Your malicious client plugin could potentially forward the client session details to you, so you could operate the broker page, then log in to your bank's portal and approve that request, but I don't think that's going to scale very well and I imagine your bank is likely going to rate limit you.
real-broker opens a web page allowing them to verify somehow. The browser extension sends me their URL and cookies so I can load the same page and verify myself. All automated of course.
You could, you could also go to their house and go through the process for them, but in either case I don't think it's going to scale very well (rate-limiting would seem to be called for, maybe with 2FA as well, to mitigate this sort of thing and remove the possibilities for automation).
But sure, you could subvert it on a small scale, just as you can borrow someone else's driving license to register in 'normal' systems already. You could also register an account, validate it and then sell the login details, regardless of what proof of age scheme you use.
The point is the scheme is no worse at validation than asking for ID and it protects user privacy by keeping all ID details away from individual websites, which is the more important part IMHO.
My cellphone provider will be pleased be paid to deliver all those 2FA text messages. Who's sending them? How are they getting paid? Maybe I'm actually my own phone company, so I get paid for delivering them to myself.
Your bank, like they have 2FA for every other access to your account. 2FA also doesn't need to be via SMS, and even when it is, that's dirt cheap. Rate limits can be a couple of approvals per hour with daily limits of a small handful. Or a leaky-bucket style algorithm where you can do a few at a time, but you only get one more per hour. Whatever way it's done, it precludes your large-scale automation attempt.
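A leaky-bucket limiter of the kind described is a few lines. This is a generic sketch (the capacity and refill interval are illustrative, not anything a real bank publishes): a small burst is allowed up front, then one more approval per interval.

```python
import time

class LeakyBucket:
    """Allow a small burst, then refill one token per `interval` seconds."""

    def __init__(self, capacity: int = 3, interval: float = 3600.0):
        self.capacity = capacity
        self.interval = interval
        self.tokens = float(capacity)   # start full: a few approvals available
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) / self.interval)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With `capacity=3` and a one-hour interval, three sign-up approvals go through immediately and the fourth is refused until an hour has passed, which is harmless for a real user signing up for one site but fatal for bulk automation.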
I tire of this now. We've entirely wandered off from "Here's a way to prove age without the privacy implications, that works just as well as handing over scans of ID"
Your bank would likely have a limit on the number of approvals it would issue over time, to stop automated exploits, sure. In theory you only need these approvals once per site on signup.
We are pre-supposing for the sake of this thread that proving you are over 18 is desirable, but that giving your ID to unknown third parties is not.
That being the case, having a rate-limit on site approvals would appear to be a relatively reasonable tradeoff to stop the system being exploited for gain by third parties like the commenter upthread.
If you don't want any of that in the first place, cool, but I'm not making an argument for it here, just saying that a system that meets these two requirements is possible.
> There's no way to prove someone is some age without presenting a legal ID.
Sure there is.
Verifiable Credentials and other similar standards allow this to be delegated in such a way that there is no need to present ID or even let the site know who you are. The site can issue a request to a third party that simply provides back "Yep, we attest that this request was approved by someone over 18".
Depending on the exact scheme, the request may forward you to a broker, who will then forward the request (and your web session) on to the trusted third party of your choice which has already performed ID verification (usually a bank). The bank sends a signed response back to the broker, the broker sends a signed response back to the requesting site.
Is it perfect? Maybe not 100%, the broker knows there was a request from a restricted site forwarded to a given bank. The bank knows you have approved a request. There is likely to be an identifier of some sort sent from the site all the way through to the back-end so you know you're not being MITM'd. But in theory nobody should have the full picture.
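The identifier mentioned above is what defeats replayed or cached approvals. A minimal sketch (hypothetical names throughout; HMAC stands in for the asymmetric signature a real broker would use) showing why an attestation issued for one request is useless for any other:

```python
import hmac, hashlib, secrets

BROKER_KEY = secrets.token_bytes(32)  # illustrative; a real broker publishes a public key

def broker_attest(request_id: str) -> bytes:
    # The approval is cryptographically bound to this specific request.
    return hmac.new(BROKER_KEY, f"{request_id}:IsOver18=true".encode(),
                    hashlib.sha256).digest()

def site_verify(request_id: str, attestation: bytes) -> bool:
    expected = hmac.new(BROKER_KEY, f"{request_id}:IsOver18=true".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(attestation, expected)

first = secrets.token_hex(8)
cached = broker_attest(first)
legit = site_verify(first, cached)                    # genuine response verifies
replayed = site_verify(secrets.token_hex(8), cached)  # cached response reused: rejected
```

Because the site generates a fresh identifier per request and checks the signature against it, a man in the middle can't substitute a previously approved response.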
No practical way I should say. Realistically, it's pretty clear that lawmakers really just want to shove it through in the simplest way possible. Which is probably private third parties.
And private third parties are very shady. They have effective monopolies and no significant public face to care about. I think we have seen this pattern play out in healthcare, compliance and other industries already.
Also idk about banks being the effective gatekeepers to the internet and eventually all technology. Just feels like it's not their place to do that.
> The reason is that this whole push for age verification is nothing to do with actually stopping kids seeing the content.
The reason that mainstream politicians are pushing is because the public wants something done to protect their kids.
Are there likely to be bad actors pushing for it for nefarious reasons as well? Sure.
Are the 'solutions' inadequate and often tech- and privacy-illiterate? Absolutely.
Is the entire impulse to demand that government 'fix' this issue wrong? Maybe.
But the idea that this is all a smoke-screen from top to bottom needs to die. Not just because it's wrong, but because it's also unhelpful. If you wade into the debate saying "It's all a lie, this was never about the kids!" you're easily dismissed as a nut and an absolutist who doesn't appreciate that real people want their real kids to be protected.
Yep, and the tech companies had years to address these concerns and did not, so now the creaky gears of government regulation are turning. They (meaning YOU, a lot of tech company employees who are now outraged about this) could have headed this off years ago and provided a solution on their own terms.
So, why are those "real people" actually not willing to do their job? I am so pissed with parents who think the government is supposed to solve their own inability to raise a child.
Well for a start not all of them are very tech savvy, and we've built a world in which tech is essential to their day to day lives, including for their kids.
If school demands the kids have a variety of devices to do their work, and they have no idea how to lock those down to exclude (for example) social media services that we know have been designed to be as addictive as possible, can you not see why they might want someone to intervene?
(edit: Beyond that there are also tons of bad reasons, I'm not going to try and justify them. There are a lot of bad parents and just in general people who are not firing on all cylinders out there. And many of them absolutely love a government regulation to be brought in for just about anything.
We can and should argue with these people and point out why they're wrong. But saying it's "nothing to do with actually stopping kids seeing the content" fails here too.)
Right. I submit we are solving the wrong problem. Just establishing age verification doesn't magically make these vast numbers of bad parents good parents. There are a ton of other things they can and will fail at, which their kids have to absorb. If we really cared about those kids, we'd have to reconsider a lot of things. And I know what I am talking about; I had to grow up with an undiagnosed ADHD + anxiety mother. It was hell. And even 30 years after I moved out, she still can't see what she failed at and continues to fail at. Age verification wouldn't have helped me. MAKING her seek treatment might have helped.
No argument here, I'm not saying they're right to demand that age verification is brought in to protect kids, or that we should give up privacy etc etc.
But coming at it from the angle that "It was never about protecting kids!" is itself incorrect and unhelpful to the debate.
It can be true that kids need to be protected, this (or some variation of it) is a good way to protect kids, therefore it's going to pass, and nefarious interests found a way to insert themselves into the process and piggyback off the efforts to increase real protection of real kids in order to also spy on the kids.
If you want to reject the nefarious actors you have to separate them from the other goals that are reasonable and sorely needed. If you treat it as a whole package, you'll fail because those other goals are too important not to try to achieve, and the package is going to get passed. If you separate them, we can advocate for the pretty sensible California-style law where it's a flag on your user account that root can change, instead of the utterly insane New-York-style law where you have to scan your face every time you open your phone.
If public school is supposed to be free, the school should supply the required devices and take on the burden of securing those devices.
For private schools, the parents are more involved in the first place, but I would expect them to also have guidance for parents to help the less tech savvy among them.
We expect every other consumer product/toy that kids are intended to use to be safe by default. This is like asking why parents shouldn't be responsible for testing all their kids toys for lead paint.
Yet when it comes to internet/social media technology, it's suddenly a parenting failure if they don't pre-vet every platform and website and device before allowing their kids to use it.
As a society, we collectively protect kids from stuff they aren't ready to handle. We don't let them gamble, or buy alcohol, cigarettes, or porn. For the most part, everyone buys in to this and parents can pretty much count on it. Are there exceptions, sure but they create scandals and consequences when they are discovered.
But social media and content platforms didn't feel that they had any social obligations. They did not honor this societal convention to keep inappropriate content away from kids. And the top people at these companies actually don't let their own kids use the platforms, they know how harmful they are and they know about all the addictive hooks and dark patterns of engagement that are baked into them.
We don't just assume every book and movie and telephone call are intended to be safe for kids by default. Why should we expect the internet to be like that?
The public largely wants whatever the media tells them to want and the media in turn tells people to want whatever the same bad actors want them to want.
> Almost no one thinks their code is copyrightable
I think this is an unusual opinion.
Code may not be copyrightable in as small chunks as you put there, but in terms of larger pieces I think companies and individuals very often labour under the belief that code is intellectual property under copyright law.
If code isn't copyrightable, from where comes the GPL?
And why does anyone care if (for instance) some Microsoft code might have accidentally ended up in ReactOS, causing that project to need to go into a locked-down review mode for months or years? For that matter why do employers assert that they own the copyright in contracts?
I think it's the opposite - almost everyone thinks their code is copyrightable, outside of APIs and interop stuff, or things so simple as to be trivial.
I guess it tracks with personal experience. I find Paracetamol is OK for fevers/generic cold symptoms but absolutely useless for a headache, Ibuprofen is the only thing that shifts them.
Well it's the only thing that shifts them now I'm in a country where I can't buy soluble aspirin and codeine OTC.
You can still buy 100 packs, they are just behind the counter at chemists. TBH it's a rather stupid restriction - do they think people only ever own 1 packet of paracetamol at a time? In my household we have at least half a dozen, including a 100-pack from Oz and a 500-pack from America.
Oh right - that's probably what we did, buy a big pack from behind the counter.
I don't think you can even do that in the UK.
Yeah we usually have a few packs hanging around, and I get the 'it seems stupid' thing, but sometimes just adding a tiny bit of friction when someone's trying to kill themselves might save a life. I dunno, I hope that's shown in the evidence anyway. Otherwise it's just pointless like the whole pseudoephedrine song and dance, which has inconvenienced anyone looking for a decongestant while doing sweet FA to the availability of meth.
> Oh right - that's probably what we did, buy a big pack from behind the counter.
No, when you visited they were still on the shelf. They only put them behind the counter in 2025.
> sometimes just adding a tiny bit of friction when someone's trying to kill themselves might save a life
I'm philosophically not for making suicide harder. If someone wants to die, that's their right. And practically, while you might be able to show a stat-sig decrease in paracetamol poisoning, I'd expect the suicides to largely just move to other methods.
The point is that many don't really want to. Those that actually want to can buy two boxes from two shops or ask the pharmacist for the big pack from behind the counter.
This just adds a tiny amount of friction to impulsive attempts, which may be a classic cry for help or just someone in the depths of some sort of mental health episode. Such folks may think better of it the next day and a very small amount of inconvenience will put them off. I think suicide is serious enough that you should probably mean it, and societally saying 'think twice about this' is a good thing.
On the idea that it just shift deaths, as your sibling poster points out (from the UK) -
"in the 11 years following the legislation there were an estimated 765 fewer suicide and open verdict deaths from paracetamol poisoning, which represented a reduction of 43% [...] This reduction was largely unaltered after controlling for a downward trend in deaths involving other methods of poisoning and also suicides by all methods."
So it looks like this tiny, tiny barrier does actually deter people. And that definitely points to them not really being sold on it in any rational way.
I just don't buy the paternalism. People have free will, if they want to do something they would regret later, it's still their right.
That quote doesn't say what you think it means. It's not talking at all about whether suicides shifted to other methods; it only says that there was a secular decline in poisonings (-32%) and suicides in general (-10%) during the study, so they have to also discount some of the raw 48% drop in paracetamol as being part of that broader trend and not due to the treatment. They come to the 43% number only with a generous assumption that had the law not gone into effect, there would have been an increasing trend in deaths from paracetamol poisoning, which seems wrong to me. The more obvious way to derive the prior would be to look at non-paracetamol poisonings and expect the same trend, in which case the effect might be something like -24%.
Anyhow, it's still perfectly possible that the people who were deterred from paracetamol poisoning committed suicide some other way; the data in that paper says nothing about it.
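The -24% back-of-envelope above can be made explicit. Taking the figures as quoted in the comment (a raw 48% drop in paracetamol deaths against a 32% secular decline in other poisonings) and assuming the secular trend as the counterfactual:

```python
# Figures as quoted upthread (assumptions, not re-derived from the paper):
raw_drop = 0.48      # observed drop in paracetamol poisoning deaths
secular_drop = 0.32  # decline in non-paracetamol poisonings over the same period

# Share of the counterfactual (trend-only) death count actually observed:
remaining = (1 - raw_drop) / (1 - secular_drop)

# Residual effect attributable to the legislation under this assumption:
effect = 1 - remaining
print(round(effect * 100, 1))  # ~23.5, i.e. roughly the -24% mentioned
```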
> People have free will, if they want to do something they would regret later, it's still their right.
Then this minor frictional measure is the very least of your worries. For a start, any given pharmacy has an entire pharmacopoeia of compounds that people are kept away from for their own good. Not to mention liquor licensing rules making landlords cut folks off at a bar if visibly drunk, etc. And guard rails to stop people climbing to high places. And ... preventing people from doing stupid shit in the moment is everywhere in our societies.
There are a heck of a lot of things I'd put higher up my list of concerns than "may have to visit two shops if wanting to kill myself"
Paraphrasing from [0], after September 1998 when the restriction was introduced, "The annual number of deaths from paracetamol poisoning decreased by 21% [...] the number from salicylates decreased by 48% [...] Liver transplant rates after paracetamol poisoning decreased by 66% [...] The rate of non-fatal self poisoning with paracetamol in any form decreased by 11%"
See also [1]: "in the 11 years following the legislation there were an estimated 765 fewer suicide and open verdict deaths from paracetamol poisoning, which represented a reduction of 43% [...] This reduction was largely unaltered after controlling for a downward trend in deaths involving other methods of poisoning and also suicides by all methods."
> I get that it is annoying that crypto facilitates cybercrime, but that is the cost of privacy for everyone.
Who decided this, was 'everyone' consulted on what they'd rather have? Because it seems to me like cyber-criminals and a handful of idealists got what they wanted, and everyone else can suck it...
Who decided you can post this? Was 'everyone' consulted on what they'd rather you post? Because it seems like you and a handful of politicians posted ideas you wanted and everyone else can suck it...
I hope you see the absurdity of your 'everyone' claim.
I see the absurdity of claiming that something (in this case absolute financial privacy) is for the benefit of everyone and worth the costs without taking into account whether the tradeoff does actually benefit everyone, or if the price is something that most people, let alone everyone, would be willing to pay.
Because claiming cybercrime is a price that is worth it for everyone to have this privacy comes across a lot like Trump saying "Don't expect the US to fight your wars for you any more, you're welcome, ingrates" while waging an unnecessary war nobody else wanted.
Crypto makes cybercrime pay, without it collection would be almost impossible. The post I'm responding to argues that it is worth it. I disagree and think it's presumptuous to claim it has anything of a net benefit for society.
The idea that it doesn't make whole categories of crime profitable and therefore attractive, or that the impact is negligible, is not really supportable in a world with rampant cryptolockers and other crypto-currency enabled extortion.
Further, the appeal of this sort of financial privacy for non-criminal use is pretty limited. But you know all this, the alleged privacy benefits have been a talking point for many years now but in the end there's no real legit crypto use cases and still no real interest in crypto beyond crime and gambling.
I'll add some numbers to back up crypto—it is built on trustless numbers, unlike your fiat, after all.
Chainalysis's most recent report puts "illegal" activity at under 1% of total crypto transaction volume[1]. UNODC[2] estimates "The estimated amount of money laundered globally in one year is 2 - 5% of global GDP, or $800 billion - $2 trillion in current US dollars. Due to the clandestine nature of money-laundering, it is however difficult to estimate the total amount of money that goes through the laundering cycle."
HSBC[3], TD[4], Credit Suisse[5], and others have each been moving cartel, sanctioned, or Iranian money in sums that dwarf every ransomware payment ever made combined. If enabling "crime" disqualifies a payment method, then fiat loses that comparison by more than an order of magnitude.
>Crypto makes cybercrime pay, without it collection would be almost impossible.
Ransomware predates Bitcoin by two decades. The AIDS Trojan in 1989 demanded a cashier's check to Panama. Pre-Bitcoin lockers like Reveton and Winlock collected via MoneyPak, Ukash, Paysafecard, and wire transfers.
>Further, the appeal of this sort of financial privacy for non-criminal use is pretty limited.
Alexei Navalny's Anti-Corruption Foundation, which accepted crypto after Russia froze its banking. The Ukrainian government, which received over $100M in crypto donations in the first weeks of the 2022 invasion. WikiLeaks, after Visa/MC/PayPal blockaded it in 2010 with no court order. Nigerian #EndSARS protesters, whose bank accounts were frozen. Iranian, Argentine, Lebanese, and Venezuelan savers watching double-digit monthly inflation destroy their hard-earned wages. Migrant workers sending remittances home for ~1% instead of Western Union's 7–10%. Here[6] is a list of hundreds of non-profits that accept Monero, because people want to be able to donate privately. The FSF received a total of 900,000 USD in Monero donations in two large contributions just in this past year. GrapheneOS, which has employees across many continents, pays all but one of its 10+ developers in cryptocurrency.
>no real interest in crypto beyond crime and gambling.
Besides pushing back on the idea that "crime", without a specific definition of what is actually happening, is necessarily bad: XMRBazaar hosts over 8000 legal, trustless craigslist-style listings[7]. Eggs, real estate, Italian meats. I'm shivering in my boots at all this Crime[1].
Some of your response is disingenuous. Ransomware has been around a long time, certainly, but adding cryptocurrency payment rails has made it far more prevalent; those other methods are much harder and riskier to execute well.
Other parts of your response are irrelevant and nothing to do with cryptocurrency. Because bad laws exist with regards to abortion and sexuality, we should disregard all laws, is that your argument?
And searching for people's experiences on XMR Bazaar is hilarious - "When looking at the platform's homepage statistics and browsing other listings, it seems like the vast majority of offers never see a single trade, even the interesting ones with Escrow enabled. There appear to be significantly more listings than actual completed orders."
So I'm not sure that's an argument for there being general interest in day to day use of cryptocurrency.
Lots of your other claims are easily dismissed - migrant workers are still not really using crypto for remittances, it's estimated at under 3% of the market, and is also rife with predictable problems - https://www.austrac.gov.au/news-and-media/media-release/aust...
There are some interesting edge cases in there, when it comes to Ukraine, certainly. But in general we appear to have the same handful of enthusiasts doing niche things, no general interest, and a lot of dodgy shit.
Fundamentally, I wonder what the rest of the on-chain transaction volume can be, because cryptocurrency has failed to go mainstream as a payment service over the course of 17 years now. Investment/speculation springs to mind as an obvious candidate, so we're effectively back to gambling by proxy at that point.
Does anyone use ReactOS in a production-like fashion?