
Apple to start scanning iPhones for child abuse images


Apple has said it will begin scanning customers’ devices for images of child sexual abuse in an effort to protect the young and prevent the spread of such material.

Announcing the move on Thursday, August 5, the tech giant said it will use technology in upcoming versions of iOS and iPadOS to detect illegal child imagery on an Apple-made smartphone or tablet.

How it works

Apple said that before an image is uploaded to iCloud, a detection tool called neuralMatch will conduct an on-device matching process against a database of known child sexual abuse imagery maintained by the National Center for Missing and Exploited Children (NCMEC). The company said the technology has been designed with user privacy in mind, explaining that it doesn’t view a device’s images but instead compares digital fingerprints derived from the content against fingerprints of known material to check for a match.
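
In essence, the check is a fingerprint lookup rather than an image upload. The Swift sketch below is a deliberately simplified illustration of that idea, not Apple’s implementation: it substitutes an exact SHA-256 digest for Apple’s perceptual fingerprinting, and `knownFingerprints` is a hypothetical stand-in for the encrypted NCMEC hash database shipped with the operating system.

```swift
import Foundation
import CryptoKit

// Hypothetical stand-in for the database of fingerprints of known abuse
// imagery. In Apple's system this is a blinded, encrypted hash set shipped
// with the OS, not a plain-text list.
let knownFingerprints: Set<String> = [
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
]

// Computes a fingerprint for a local image file. A real system would use a
// perceptual hash so that re-encoded or resized copies still match; SHA-256
// is used here only to keep the example self-contained.
func fingerprint(ofImageAt url: URL) throws -> String {
    let data = try Data(contentsOf: url)
    return SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

// The on-device check: the photo itself never leaves the device; only its
// fingerprint is compared against the known list before upload.
func matchesKnownMaterial(imageAt url: URL) throws -> Bool {
    try knownFingerprints.contains(fingerprint(ofImageAt: url))
}
```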

If the system detects images of child sexual abuse, the case will be reported to NCMEC and passed to law enforcement. The user’s Apple account will also be deactivated.

Apple’s Messages app will also use on-device machine learning to warn children and their parents when receiving or sending sexually explicit photos. Siri and Search will be updated, too, so that if someone performs a search related to child sexual abuse, they’ll be informed that their interest in the topic is harmful before being directed to resources offering help.
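
The Messages feature works differently: it does not compare photos against a database but runs a classifier on the device and, if a photo looks explicit, hides it behind a warning before the child can view it. The SwiftUI sketch below illustrates only that flow; `looksSexuallyExplicit(_:)` is a hypothetical placeholder for an on-device Core ML model, not an Apple API.

```swift
import SwiftUI
import UIKit

// Hypothetical placeholder for an on-device classifier; a real implementation
// would run a Core ML model here. Nothing is sent off the device.
func looksSexuallyExplicit(_ image: UIImage) -> Bool {
    false
}

// Toy view mirroring the described flow: flagged photos are hidden behind a
// warning until the recipient explicitly chooses to view them.
struct IncomingPhotoView: View {
    let photo: UIImage
    @State private var acknowledgedWarning = false

    var body: some View {
        if looksSexuallyExplicit(photo) && !acknowledgedWarning {
            VStack(spacing: 12) {
                Text("This photo may contain sensitive content.")
                Button("View anyway") { acknowledgedWarning = true }
            }
        } else {
            Image(uiImage: photo)
                .resizable()
                .scaledToFit()
        }
    }
}
```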

Response

While child protection groups have welcomed Apple’s move, others have voiced concern that the system could be abused.

Leading cryptography researcher Matthew Green of Johns Hopkins University said in a series of tweets that the system could be exploited to get innocent people into trouble by sending them seemingly innocuous images crafted to trigger a match.

But Apple insists the system features “an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account,” adding that a human reviewer will always examine a flagged report before deciding whether to escalate it.
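
The “one in one trillion” figure reflects the fact that an account is only flagged after multiple independent matches, so per-image errors have to compound before anything reaches a human reviewer. Apple has not published its per-image error rate or its match threshold, so the numbers in the short sketch below are placeholders chosen purely to show the arithmetic.

```swift
import Foundation

// Placeholder inputs: Apple has not disclosed either value.
let perImageFalseMatchRate = 1e-6   // assumed chance a single innocent photo falsely matches
let requiredMatches = 30.0          // assumed number of matches before an account is flagged

// If matches are independent, the chance that an account hits the threshold
// purely by accident is the per-image rate raised to the threshold power.
let accountFalseFlagRate = pow(perImageFalseMatchRate, requiredMatches)
print(accountFalseFlagRate)         // vanishingly small for these placeholder inputs
```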

The company said that if a user feels their account has been mistakenly flagged, “they can file an appeal to have their account reinstated.”

But there are also concerns that authoritarian governments may try to use the system to monitor citizens, such as activists who oppose the regime.

In further analysis, Green said, “Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you.”

The researcher continued: “Whether they turn out to be right or wrong on that point hardly matters. This will break the dam — governments will demand it from everyone. And by the time we find out it was a mistake, it will be way too late.”

Meanwhile, John Clark, president and CEO of NCMEC, described Apple’s move as “a game-changer,” adding, “With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material.”

Apple said the changes will arrive first in the U.S. in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey later this year.
