Image via Daria Photostock / Shutterstock.com
Apple has recently revealed to 9to5Mac that, prior to its announcement that it would begin scanning users’ photos for child sexual abuse material (CSAM), it had already been scanning iCloud Mail for similar material since 2019.
The source also suggested that Apple scans some limited "other data" as well, though it did not clarify what that data is, saying only that this scanning happens on a much smaller scale. Since iCloud Mail isn’t end-to-end encrypted, scanning messages as they pass through Apple’s servers for abusive material is no great technical feat.
Apple’s mail scanning has been hinted at more than once in the past, but it seems to have gone relatively unnoticed, in contrast to the commotion raised over the company’s recent decision to scour iCloud Photos and other data storage that users consider sensitive.
One of the hints lies in a now-archived webpage on child safety, where Apple details its efforts to use “image matching technology” to detect and report the exploitation of children. Comparing the technology to email spam filters, it describes the “electronic signatures” used to flag suspicious content.
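Apple hasn’t published the details of that system, but signature matching in this sense generally means computing a fingerprint of each file and comparing it against a database of fingerprints of known material, so the content itself never needs to be interpreted. Here is a minimal Python sketch of the idea; the names are invented, and a plain SHA-256 digest stands in for whatever fingerprinting scheme Apple actually uses:

```python
import hashlib

# Hypothetical set of "electronic signatures" for known abusive files.
# Real systems use databases supplied by child-safety organizations.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"known-flagged-attachment").hexdigest(),
}

def matches_known_signature(attachment: bytes) -> bool:
    """Return True if the attachment's digest matches a known signature."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_SIGNATURES

print(matches_known_signature(b"known-flagged-attachment"))  # True
print(matches_known_signature(b"holiday-photo"))             # False
```

Note that an exact cryptographic hash like this only catches byte-identical copies; perceptual hashing schemes, such as the NeuralHash system Apple described for its photo-scanning plan, are designed to also match visually similar versions of an image.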
Another example, 9to5Mac states, is something that Jane Horvath, Apple’s chief privacy officer, said at a conference in January 2020. According to her, the company uses “screening technology” to “look for the illegal images.”
If evidence of CSAM were found, she said, the offending account would be disabled. At the time, however, she didn’t specify how the material was discovered, and it seems no one pressed her on the point, either.
Now, though, it can be inferred that the newly announced iCloud Photos scanning is simply Apple expanding what it has already been doing with mail attachments. Of course, the two aren’t entirely the same: users’ photo libraries are inherently more private, and therefore more sensitive, than what they choose to send in an email.
The uproar over the company’s new plans continues, it seems.