
I posted this link and I named it the way I did to draw attention to this in context of CSAM enforcement... this man could have easily uploaded any photos to these hacked iCloud accounts, which would've been synced down to end user devices.

Apple didn't catch on to this, despite him not using a VPN or Tor... it wasn't until the FBI investigated a public figure's hacked and posted photos that this came to light.

[EDIT]: Not the FBI, but a private company noticed this (h/t codeecan)



Comments like this are so bizarre to me.

We know for a fact that Google, Microsoft, etc. do server-side scanning of photos for CSAM. Apple should be assumed to do the same.

So what exactly is the difference whether this is done client-side or server-side? The person being hacked would still be investigated by the FBI.


The problem with the US statute for CSAM is that possession is illegal, not just intentional creation/collection/distribution. The person being hacked has technically broken the law, even if they don’t get prosecuted.

I don’t know how often unintentional possessors are prosecuted, but the US system of prosecution makes it easy for an innocent person to get railroaded by threats of massive charges and comparatively lenient plea deals, combined with punitive sentencing for those who reject the plea bargain. Think Aaron Swartz, but without any intent to violate the law.

> The person being hacked would still be investigated by the FBI

As someone with family in the FBI (one on a relevant team) and a local LEO that was deputized to do this work for the US Marshals, that doesn’t reassure me. The best forensics employees in the FBI with enough resources can identify that there was a hack and that the account owner is innocent. We live in a world of scarcity where that much effort is not always invested.

I think the client-side versus server-side question is more about the relative trade-offs of who owns the client device (and what “ownership” means) and whether the equivalent server-side search is technologically feasible (it might not be if the client encrypts with a key only the client owns, as some have speculated about Apple’s future plans).


What strict-liability statute are you referring to?


This HN comment suggests that you are right that there is no strict-liability statute for CSAM: https://news.ycombinator.com/item?id=28235669

IANAL so I am very likely wrong.

18 U.S. Code § 2252 seems to state that possessing or viewing CSAM requires that the action be done “knowingly”.


Well Apple differentiates themselves on privacy. I would prefer to do business with a company that never looks at my data for any reason. The problem with on-device scanning is the implicit backdoor.


If Apple were to do what many recommend and do CSAM scanning in the cloud like other providers, would that change this attack vector?


It's only an attack vector in the minds of people who haven't given it more than 10 seconds of thought.

Apple knows the sync dates of all of the photos that are uploaded. So unless someone has hacked your account and has been directly trickle feeding CSAM for years (without you noticing) then it's going to look suspicious. A big dump of lots of CSAM at one particular timestamp is a pretty easy thing to spot.

And in this case they aren't hacking the phone but the account, which means Apple is going to notice a set of photos coming from an IP address it hasn't seen used with that account before.
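The heuristic described above, a large batch of photos all landing at one timestamp, could be sketched roughly like this. Everything here is hypothetical: the record shape, the field name `uploaded_at`, and the threshold are all assumptions for illustration, not anything Apple has described.

```python
from collections import Counter
from datetime import datetime

# Hypothetical heuristic: flag any sync batch where an unusually large
# number of photos share the same upload timestamp (rounded to the minute).
BURST_THRESHOLD = 50  # assumed cutoff; a real system would tune this


def find_upload_bursts(photos, threshold=BURST_THRESHOLD):
    """photos: list of dicts with an 'uploaded_at' datetime.

    Returns (minute, count) pairs where uploads spiked past the threshold.
    """
    buckets = Counter(
        p["uploaded_at"].replace(second=0, microsecond=0) for p in photos
    )
    return [(minute, n) for minute, n in buckets.items() if n >= threshold]


# Example: 60 photos dumped within one minute stand out against a trickle.
dump = [{"uploaded_at": datetime(2021, 8, 1, 12, 0, i % 60)} for i in range(60)]
print(find_upload_bursts(dump))  # → [(datetime(2021, 8, 1, 12, 0), 60)]
```

A trickle of a few photos per day never crosses the threshold, which is exactly the "directly trickle feeding for years" escape hatch the comment concedes.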


Do you think that Apple is going to decide whether a big dump of CSAM was uploaded by that user or a hacker and act differently based on that investigation, or just send it to LEO and let them sort it out?

Seems like there could be some legal ramifications from the choice to bypass law enforcement under certain circumstances.


Depends on if they think the public will buy their claim of "we just let law enforcement sort it out." If they think the public will blame them for the false accusation, they are incentivized to avoid letting it happen.


> A big dump of lots of CSAM at one particular timestamp is a pretty easy thing to spot.

Only if that system / heuristic has been built. The same could have been said about Apple’s systems for identifying bulk account hijacks, but Apple evidently hadn’t built them, which I suppose is the value of this story.

And companies aren’t allowed to just inspect content once they identify CSAM. It is kryptonite for criminal liability. Companies are required to turn it over to the feds quickly and to try not to disturb metadata.

I suspect your line of thought would work given full ability to inspect (and some assumptions about what an IP change actually proves), but in practice Apple still hasn’t gotten the basics around account hijacks/fraud sorted out, so I’m hesitant to cheer them on as they try to quickly jump into the deep end screaming “think of the children!”.


Are they jumping into the dark or tossing their users over the edge and listening for a splash? Or maybe a splat.


This comment assumes that Apple does a lot of heavy lifting to exonerate individuals who are found with CSAM beyond just reporting them to law enforcement.

Of course metadata could exonerate someone who is a victim in a case like this. The question is will it ever see the light of day?


The negative PR from a false accusation would be expensive, on top of the judgment itself. And you know that Apple has deep enough pockets that someone will be looking for a big score.


"A lot of heavy lifting"

Also known as a 20 line script which checks the last modified date for a bunch of recently uploaded files and validates the IP address against the recently known list.
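For illustration, a sketch of the kind of "20 line script" being claimed, with hypothetical data shapes: it assumes upload logs as a list of `{'timestamp', 'ip'}` records and a per-account set of previously seen IPs, neither of which is a real Apple interface.

```python
def suspicious_uploads(uploads, known_ips, window_seconds=300, batch_threshold=25):
    """Flag a batch of uploads that arrived close together from an IP the
    account has never used before. All field names and thresholds are
    hypothetical.

    uploads:   list of {'timestamp': float, 'ip': str}, oldest first
    known_ips: set of IPs previously seen on this account
    """
    flagged = [u for u in uploads if u["ip"] not in known_ips]
    if not flagged:
        return None
    first, last = flagged[0]["timestamp"], flagged[-1]["timestamp"]
    if len(flagged) >= batch_threshold and last - first <= window_seconds:
        return {"count": len(flagged), "ip": flagged[0]["ip"]}
    return None


# 30 uploads in 30 seconds from an unseen IP -> flagged as a suspicious dump.
uploads = [{"timestamp": 1000.0 + i, "ip": "203.0.113.7"} for i in range(30)]
print(suspicious_uploads(uploads, known_ips={"198.51.100.2"}))
# → {'count': 30, 'ip': '203.0.113.7'}
```

The script is indeed short; the reply below is right that the hard part is not the code but having a process that runs it and hands the context to law enforcement.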


The code to extract metadata is easy. I’m talking more about whether or not there is a deliberate process in place to actually write the code, run the checks and provide all available metadata and context to law enforcement. Apple has not indicated that process exists, thus far.


Not giving it 10 seconds of thought seems common in most HN reactions to the whole CSAM thing.


no

Edit: No if they use the same algorithm, but they could use another algorithm that is less abusable, and no one would know the hashes in the database, so yes, I guess?


Scary indeed. Slight correction: not the FBI [initially]:

> A California company that specializes in removing celebrity photos from the internet notified an unnamed public figure ...

He was caught by the random chance of this company noticing.


If he was specifically going after famous women's accounts, I don't think it was so random, given that he went after hundreds of people and didn't cover his tracks at all. He was after celebrity photos, he was sloppy, people who try to defend against such attacks were going to catch him.


We've seen more decentralized and sophisticated attacks of the same type against iCloud ("the fappening" etc.) which were kept mostly private for years before being made public.

The fact that those hacks quickly were flushed from the news cycle without a bunch of public lawsuits etc. makes me suspect Apple very proactively went out and made settlements with the more high profile victims of those hacks. Of course, I have no proof of this at all, so it's purely speculation, but it was odd to see almost nothing come out of those hacks.


> without a bunch of public lawsuits

Apple is not at fault here though.

These people have clicked on a phishing email no different to a banking or retail one.


The fact Apple missed logins to hundreds of accounts over time from a single IP, probably registered to an ISP like Spectrum or Verizon, is a little suspect. Then again, there are probably public IPs with a NAT and thousands of iPhones behind it at times. This might be a really hard one to detect even though it was sloppy.


Companies regularly NAT many thousands of users behind a single public IP. Additionally non-profits, schools, and others often provide WiFi for their guests/students using a supposedly residential internet account or their ISP doesn't segment basic business IPs from residentials.

In any case flagging multiple accounts logging in from a single public IP is not as useful a signal as you might think.
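As a toy illustration of why the raw count is weak evidence: behind carrier-grade or campus NAT, a single IP can legitimately show logins to hundreds of distinct accounts, so a naive "accounts per IP" counter (sketched below with made-up IPs and account names) fires identically on benign and malicious traffic.

```python
from collections import defaultdict


def accounts_per_ip(login_events):
    """Count distinct accounts seen per source IP.

    login_events: iterable of (ip, account) pairs -- a made-up event shape.
    """
    seen = defaultdict(set)
    for ip, account in login_events:
        seen[ip].add(account)
    return {ip: len(accounts) for ip, accounts in seen.items()}


# A university NAT and an attacker's home IP look the same to this counter:
events = [("192.0.2.1", f"student{i}@example.com") for i in range(500)]       # benign NAT
events += [("203.0.113.9", f"celebrity{i}@example.com") for i in range(300)]  # attacker
print(accounts_per_ip(events))  # → {'192.0.2.1': 500, '203.0.113.9': 300}
```

Both IPs exceed any plausible threshold, so the count alone cannot separate them without additional signals (geography, device fingerprints, failed-login ratios, etc.).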


Given that the accused was arrested in 2007 for similar sex crimes while a Geek Squad employee, one must imagine that he’s been up to this for years.


Apple itself is currently obsoleting IP-based account-theft heuristics with iCloud Private Relay, so they might have stopped relying on them internally already :)


> I named it the way I did to draw attention to this in context of CSAM enforcement...

From the site guidelines:

> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.

Just a reminder because if a mod ends up viewing this they will probably change the title back to the original.



Absolutely: https://twitter.com/matthew_d_green/status/14299837034602045...

I assume each upload is tagged with the device ID that first uploaded it, etc., but maybe that can be spoofed as well?


There's no real reason to assume this is true, because Apple's systems didn't detect hundreds of accounts being accessed from a single, home IP...


This Twitter account continues to debase discourse about the child safety proposals with FUD. It posted incorrect information about the proposal before launch and has continued with useless speculation. How many of the hypothesized threat models which don’t pan out has he formally retracted?

If you are worried about the security of iCloud, then that can be read as more reason to prefer client side scanning. Of course the tweets are ambiguous about logical implications so you can’t engage with them directly.


And I could say that this HN account has been baselessly dismissing valid concerns about the proposal and providing non sequiturs to assert why nobody should be concerned since it was announced.

However, stating my opinion as fact in an attempt to invalidate someone else's perspective on the matter would be debasing discourse so I wouldn't do that. None of us should.


Are you asking me to provide evidence that the account posted false information and never retracted it? How about the very tweets in the linked thread where the account makes fact-free claims about how a “single IP” accessing “hundreds of accounts” (the former of which is not substantiated) suggests iCloud security is fundamentally broken. Of course Matt is smart enough to not state the implication directly, relying instead on sarcasm and FUD.

Since you went ahead and stated opinion as fact (while cleverly pretending that you didn’t), can you provide an example where I dismissed a valid concern with a non sequitur? How do you reconcile the accusation that I assert “nobody should be concerned” with comments like this where I clearly outlined why the announcement should be concerning:

[1] https://news.ycombinator.com/item?id=28279776

[2] https://news.ycombinator.com/item?id=28165116

I’ll go further and say that I have sincere concerns with what was announced, but seeing how that Twitter account seeds legions of incorrect commenters who proliferate (and post intentionally clickbait material on HN, as the poster of this article themselves admitted on this very thread!) led me to the conclusion that Matt is doing plenty of harm, especially since he should know better.


We'll call it "The Trappening".



