
>18 U.S.C. § 2258A is specific: the data can only be sent to NCMEC. (With 2258A, it is illegal for a service provider to turn over CP photos to the police or the FBI; you can only send it to NCMEC. Then NCMEC will contact the police or FBI.) What Apple has detailed is the intentional distribution (to Apple), collection (at Apple), and access (viewing at Apple) of material that they strongly have reason to believe is CSAM. As it was explained to me by my attorney, that is a felony.

I'm not sure, after reading the article, which of Apple or NCMEC has the more insane system.



This is the part that also caught my eye.

Surely Apple's lawyers have also reviewed the same law, and if it's that clearly defined, how did they justify/explain their approach?


Because Apple (its employees) aren't actually viewing or transmitting the images. They mention somewhere that it's a low-res proxy of the image, or something similar.


But wouldn't a low-res image of CP still be classified as CP? I guess the manual verification will be done as a joint venture by Apple and the authorities.


> I guess the manual verification will be done as a joint venture by Apple and the authorities

Do we know this for sure? And even if it is, this is still apparently illegal under current law.


> They mention somewhere that it's a low-res proxy of the image, or something similar.

Perceptual hashes are just integer/byte encodings of images that were scaled down and had some transformations applied to them.

If you convert the hash back into an array of pixels and reverse the transformations, you'll recover a rough, low-resolution approximation of the original image that was scaled down and hashed.
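
To make that concrete, here is a minimal sketch of one simple perceptual hash (average hash, aHash), assuming Python with the Pillow library. This is not Apple's NeuralHash, just an illustration of the scale-down/transform/encode recipe, and of how crudely a hash can be turned back into pixels:

    # Minimal average-hash (aHash) sketch, assuming Pillow is installed.
    # NOT Apple's NeuralHash; it only illustrates the general recipe:
    # scale down, grayscale, threshold, pack into bits.
    from PIL import Image

    HASH_SIZE = 8  # 8x8 grid -> 64-bit hash

    def average_hash(path: str) -> int:
        """Shrink to 8x8 grayscale, then set one bit per pixel
        depending on whether it is brighter than the mean."""
        img = Image.open(path).convert("L").resize(
            (HASH_SIZE, HASH_SIZE), Image.LANCZOS)
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for px in pixels:
            bits = (bits << 1) | (1 if px > mean else 0)
        return bits

    def hash_to_pixels(h: int) -> Image.Image:
        """Crudely reverse the hash into an 8x8 black-and-white
        thumbnail -- the 'approximation of the original' above."""
        n = HASH_SIZE * HASH_SIZE
        pixels = [255 if (h >> (n - 1 - i)) & 1 else 0 for i in range(n)]
        img = Image.new("L", (HASH_SIZE, HASH_SIZE))
        img.putdata(pixels)
        return img

Two such hashes are compared by Hamming distance (the count of differing bits), which is what makes them robust to small edits like re-compression or resizing.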


And if you then apply DLSS, you may even get the original image.


Why is it insane that the law places extremely tight controls on the legitimate distribution of CSAM?


Well, the trouble is that it makes developing detection nearly impossible.

And the line where the law is crossed is fuzzy. Say you use an AI classifier: at what accuracy does validating its results become a crime? 50.000001%?
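
To make the point concrete, here is a purely hypothetical triage sketch; the names and the threshold are invented for illustration and appear nowhere in the article or the statute:

    from typing import Callable, Iterable, Iterator, Tuple

    # Arbitrary number: the law specifies no such threshold.
    REVIEW_THRESHOLD = 0.99

    def triage(images: Iterable[bytes],
               classifier: Callable[[bytes], float]) -> Iterator[Tuple[bytes, float]]:
        """Yield images the classifier flags for human review. The fuzzy
        legal question is at what threshold the human doing that review
        is 'knowingly' accessing the material."""
        for img in images:
            score = classifier(img)  # model's estimated probability
            if score >= REVIEW_THRESHOLD:
                yield img, score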


What part of criminalizing the obvious course of action that everyone is taught to do (find evidence of illegal activity, give it to the police) makes any iota of sense to you?



