Apple CSAM Detection
Discussions center on Apple's on-device CSAM scanning using perceptual hashes from databases like NCMEC, raising concerns about privacy, false positives, hash collisions, and potential abuse by governments or adversaries.
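To make the mechanism concrete, here is a minimal sketch of perceptual-hash matching. It uses a toy "average hash" as a stand-in for Apple's NeuralHash (which is a neural-network embedding quantized to bits, not shown here); the function names and distance threshold are illustrative assumptions, not Apple's API.

```python
# Toy perceptual hashing: a stand-in for NeuralHash, for illustration only.

def average_hash(pixels):
    """Hash a small grayscale image (rows of 0-255 ints) to a tuple of bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def matches_known(image_hash, known_hashes, max_distance=2):
    """Perceptual matching: near-identical images produce nearby bit strings,
    so a 'match' is a small Hamming distance, not exact equality."""
    return any(hamming(image_hash, h) <= max_distance for h in known_hashes)

# A 4x4 "image" and a slightly brightened copy of it.
img = [[10, 200, 30, 220], [15, 190, 40, 210],
       [12, 205, 35, 215], [11, 198, 33, 212]]
brighter = [[min(255, p + 5) for p in row] for row in img]

known = {average_hash(img)}                           # pretend this came from the database
print(matches_known(average_hash(brighter), known))   # True: the hash survives small edits
```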
Sample Comments
> Apple's method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child-safety organizations. Apple further transforms this database into an unreadable set of hashes, which is securely stored on users' devices. (Source: https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf)
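The "unreadable set of hashes" mentioned above comes from a server-side blinding step. The sketch below is a loose illustration of that idea only: a keyed HMAC stands in for Apple's actual elliptic-curve private-set-intersection construction, and all names and values are hypothetical.

```python
# Minimal sketch of the "blinded database" idea -- NOT Apple's construction.
# A keyed HMAC stands in for the elliptic-curve blinding: without the server's
# secret key, the on-device table reveals nothing about the underlying hashes.

import hmac, hashlib, os

SERVER_KEY = os.urandom(32)          # held only by the server (hypothetical)

def blind(neural_hash: bytes) -> bytes:
    return hmac.new(SERVER_KEY, neural_hash, hashlib.sha256).digest()

# Server side: transform the raw known-CSAM hashes before shipping to devices.
raw_known_hashes = [b"\x01" * 16, b"\x02" * 16]        # placeholder values
blinded_table = {blind(h) for h in raw_known_hashes}

# Device side: it stores only `blinded_table`. It cannot invert the entries,
# and in the real protocol it also cannot tell whether its own image's hash
# matched -- that outcome is only recoverable server-side once the match
# threshold is crossed, which this sketch does not model.
```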
No, the terrible misfeature that this group wants is "government provides a bunch of opaque hashes that are 'CSAM', all images are compared with those hashes, and if the hashes match then the user details are given to police". Note that by design the hashes cannot be audited (though in the legitimate case I don't imagine doing so would be pleasant), so there's nothing stopping a malicious party inserting hashes of anything they want - and then the news report will be "person x brought in for questioning…"
If he wins, will the hash of this picture be added to Apple's CSAM detection system?
CSAM detection is a hash-database comparison: the images are converted to a hash and then compared to the hashes of known child pornography, not directly viewed. The weirdly under-discussed aspect of this is that anyone storing images of any kind on someone else's computer and network assumes that nothing could have been viewed before. If Apple or Google or Amazon want to scan the data you store with them, they could already be doing it, so if that was a concern for a person from the get-go then they would…
If I'm reading this right, Apple is saying they are going to flag CSAM they find on their servers. This article talks about finding a match for photos by comparing a hash of a photo you're testing with a hash you have, from a photo you have. Does this mean Apple had/has CSAM available to generate the hashes?
Apple has no way to know what image a hash has been derived from. So China can give a hash of Winnie the Pooh and claim it is CSAM. Apple won't know.
It might be useful to read the threat model document. Associated data from client NeuralHash matches are compared with the known CSAM database again on the server using a private perceptual hash before being forwarded to human reviewers, so all such an attack would do is expose non-private image derivatives to Apple. It would likely not put an account at risk for referral to NCMEC. In this sense, privacy is indeed preserved versus other server-scanning solutions where an adversarial perceptual…
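A hedged sketch of the two-stage server check this comment describes: candidate matches are only acted on past a threshold, and each is re-checked with an independent, private perceptual hash before any human review. Function names, data shapes, and the threshold constant are assumptions for illustration (Apple publicly discussed a threshold of about 30), not Apple's implementation.

```python
THRESHOLD = 30   # hypothetical constant for this sketch

def server_review_pipeline(account_vouchers, known_private_hashes, private_hash):
    """account_vouchers: list of (visual_derivative, claimed_match) tuples the
    device flagged via NeuralHash. Returns derivatives to send to human review."""
    if len(account_vouchers) < THRESHOLD:
        return []                        # below threshold: nothing is reviewed
    confirmed = []
    for derivative, _ in account_vouchers:
        # Second, independent perceptual hash computed server-side: an adversarial
        # NeuralHash collision generally will not also collide here.
        if private_hash(derivative) in known_private_hashes:
            confirmed.append(derivative)
    return confirmed                     # only these reach human reviewers

# Toy usage: colliding-but-innocent derivatives are filtered out before review.
known = {b"known-private-hash"}
dummy_private_hash = lambda d: b"innocent-hash"   # innocent images hash elsewhere
print(server_review_pipeline([(b"img", True)] * 31, known, dummy_private_hash))  # []
```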
Imagine this scenario.

- You receive some naughty (legal!) images of a naked young adult while flirting online and save them to your camera roll.
- These images have been made to collide [1] with "well known" CSAM images obtained from the dark underbelly of the internet, on the assumption that their hashes will be contained in the encrypted database.
- Apple's manual review kicks in because you have enough such images to trigger the threshold.
- The human reviewer sees a b…
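A toy illustration of this scenario's premise that images can be "made to collide": once a perceptual hash function is known, it is often easy to construct an unrelated input with the same hash. This reuses the toy average hash idea from the sketch near the top of the page (not NeuralHash), and the construction only works for hashes containing both 0 and 1 bits.

```python
# Forcing a collision against a toy perceptual hash (illustration only).

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def collide_with(target_bits, low=0, high=255):
    """Build a 4x4 'image' whose average_hash equals target_bits
    (assumes target_bits contains at least one 0 and one 1)."""
    flat = [high if b else low for b in target_bits]
    return [flat[i:i + 4] for i in range(0, 16, 4)]

target = average_hash([[10, 200, 30, 220]] * 4)   # stand-in for a "known" image's hash
forged = collide_with(target)                     # visually unrelated flat-color blocks
assert average_hash(forged) == target             # same perceptual hash, different picture
```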
I'm not even sure if it's a joke or you are serious. It is a check against existing hashes in a big database of confirmed CSAM. What are the chances that photos of your partner are in that database? If your partner is older than 12, it's 0%. Who is taking more risk of being sued for the leakage of the photos, you or Apple? The last part isn't worth discussing because the children in that DB are younger than 12.
I don't think they should have included an image that triggers false positives in CSAM detection. Edit: +reference https://www.hackerfactor.com/blog/index.php?/archives/929-On...