Researchers warn that iPhone devices may be monitored

Aug 6, 2021 12:04 GMT

Security researchers are concerned that Apple's plan to install software that searches for child abuse images on iPhones in the United States could enable the surveillance of millions of smartphones, according to Ars Technica.

The proposed system, called neuralMatch, was presented to a group of American academics earlier this week. Security researchers, while supporting efforts to stop child abuse, worry that Apple could inadvertently hand repressive governments a tool to demand their citizens' personal information, going far beyond Apple's original goal.

Ross Anderson, professor of security engineering at the University of Cambridge, said: “It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of . . . our phones and laptops.” Such a system, he warned, could be retrained to recognize other targeted imagery and text, such as anti-government signs at protests.

Tensions between law enforcement and tech giants such as Apple and Facebook, which have defended the growing use of encryption in their products and services, have only increased since 2016, when Apple fought the FBI in court over access to a terror suspect's iPhone. The new technology could serve as a bridge of reconciliation between the two sides.

Apple's well-intentioned scanning for child abuse photos raises serious privacy issues

The neuralMatch algorithm continuously scans the pictures stored on the device and those uploaded to iCloud backup. A procedure known as hashing converts each photo into a long string of numbers, which is then compared against a database of known child sexual abuse images. The system was trained on 200,000 photographs of child sexual abuse, then tested against another 200,000 images donated by the National Center for Missing and Exploited Children.
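To make the hashing step concrete, here is a minimal Python sketch. It is only an illustration: the KNOWN_HASHES database is invented, and it uses an ordinary cryptographic hash (SHA-256), which matches exact byte-for-byte copies only, whereas a system like neuralMatch would rely on perceptual hashing that also catches resized or re-encoded versions of an image.

```python
import hashlib
from pathlib import Path

# Invented stand-in for a database of hashes of known abusive images.
# A real deployment would hold perceptual hashes, not SHA-256 digests.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(path: Path) -> str:
    """Convert a photo into a fixed-length series of numbers (hex digest)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_photos(photo_dir: Path) -> list[Path]:
    """Return every photo whose hash appears in the known-image database."""
    return [p for p in photo_dir.glob("*.jpg") if hash_image(p) in KNOWN_HASHES]

if __name__ == "__main__":
    for match in scan_photos(Path("photos")):
        print(f"flagged for review: {match}")
```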

In short, once the system flags illicit photos, it proactively notifies a team of human reviewers, who then contact law enforcement agencies if the material is determined to be illegal. The system will initially be introduced only in the United States.

According to some sources, every picture uploaded to iCloud in the United States will be given a safety voucher indicating whether or not it is suspect. Apple will decrypt the suspicious photos and, if the material appears to be illegal, pass the findings to the proper authorities.
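The reporting pipeline described above could be sketched as follows. The names SafetyVoucher, issue_voucher, and review_queue are hypothetical illustrations of the flow, not Apple's actual implementation; the key point is that a match only queues a photo for human review rather than triggering an automatic report.

```python
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    """Hypothetical per-upload tag saying whether the photo is suspect."""
    photo_id: str
    suspicious: bool

def issue_voucher(photo_id: str, photo_hash: str,
                  known_hashes: set[str]) -> SafetyVoucher:
    """Every upload gets a voucher; it is marked suspicious only on a match."""
    return SafetyVoucher(photo_id, suspicious=photo_hash in known_hashes)

def process_upload(voucher: SafetyVoucher,
                   review_queue: list[SafetyVoucher]) -> None:
    """Suspicious uploads go to human reviewers, who decide whether to
    escalate to law enforcement; nothing is reported automatically."""
    if voucher.suspicious:
        review_queue.append(voucher)

if __name__ == "__main__":
    known = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
    queue: list[SafetyVoucher] = []
    process_upload(issue_voucher("IMG_0001.jpg", next(iter(known)), known), queue)
    print(f"{len(queue)} photo(s) awaiting human review")
```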