Automatic checks performed to fight against child abuse

Jan 9, 2020 08:58 GMT

Apple explained at CES this week that photos uploaded to iCloud are automatically scanned for illegal content as part of the company’s fight against child abuse imagery.

Technologies like PhotoDNA have long been used by tech giants to check the content that users upload to the cloud, and Apple says it uses such a system as well to block child abuse material.

Jane Horvath, Apple’s chief privacy officer, explained at CES that the automatic checks are performed using “matching technology,” which suggests that a PhotoDNA-like system is indeed employed.

Powered by Microsoft

PhotoDNA uses hashing to check whether newly uploaded photos match content that has previously been flagged as illegal.
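
PhotoDNA itself is proprietary, but the general flow of hash-based matching can be sketched in a few lines of Python. This is a simplified illustration, not PhotoDNA’s actual algorithm: the hash values are placeholders, and a real system would use a perceptual hash rather than a cryptographic one, so that resized or re-encoded copies of a flagged image still match.

```python
import hashlib

# Hashes of previously flagged images (placeholder values).
# PhotoDNA's real hash is *perceptual*, designed to survive resizing
# and re-encoding; a cryptographic hash stands in here purely to keep
# the sketch short, and it would only catch byte-identical copies.
FLAGGED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_hash(data: bytes) -> str:
    """Return a hex digest identifying the image bytes."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """Compare an upload's hash against the database of flagged hashes."""
    return image_hash(data) in FLAGGED_HASHES

# Example: an upload whose bytes happen to hash to a flagged value.
upload = b"test"  # sha256(b"test") equals the placeholder hash above
print(is_flagged(upload))  # True
```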

“Apple is dedicated to protecting children throughout our ecosystem wherever our products are used, and we continue to support innovation in this space. As part of this commitment, Apple uses image matching technology to help find and report child exploitation. Much like spam filters in email, our systems use electronic signatures to find suspected child exploitation,” she said, as reported by The Telegraph.

“Accounts with child exploitation content violate our terms and conditions of service, and any accounts we find with this material will be disabled.”

PhotoDNA was created in 2009 by Microsoft in partnership with Dartmouth College, and the technology is now used by a long list of organizations and companies in the tech industry.

Microsoft has since donated PhotoDNA to the National Center for Missing & Exploited Children as part of the fight against child abuse content.

Microsoft also started offering PhotoDNA as a service on Azure back in 2015, essentially giving other companies the option to run similar checks against the existing database of flagged images and make sure the content uploaded by their customers is legal.
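
An integration with such a service would look roughly like the sketch below. The endpoint URL, headers, and response format here are assumptions made purely for illustration; a real integration would follow the PhotoDNA Cloud Service documentation. The sketch only shows the general shape of the check: the uploaded bytes go to the service, which answers whether they match a known flagged hash.

```python
import requests

# Hypothetical endpoint and key: the real PhotoDNA Cloud Service is
# provisioned through Azure, and its actual URL, headers, and response
# schema may differ from this sketch.
ENDPOINT = "https://photodna.example.com/v1/match"
API_KEY = "your-subscription-key"

def check_upload(image_bytes: bytes) -> bool:
    """Send image bytes to the matching service and report a match."""
    resp = requests.post(
        ENDPOINT,
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,  # common Azure API header
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: {"IsMatch": true/false, ...}
    return bool(resp.json().get("IsMatch", False))
```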