Gmail has an automated system looking for specific hashes of illegal photos

Aug 5, 2014 13:40 GMT

Google helped with the arrest of a man who, as you may have heard, was sending indecent images of children to a friend. The news stirred mixed feelings about Google’s email scanning activities: on the one hand, it’s great that the man was caught, but on the other, it raises a big question about how private anyone’s Gmail really is.

Google has come out and said that it only scans emails for advertising purposes, and that child abuse imagery is the only type of content it flags while scanning. This means that other types of criminal activity stay under the radar or, at least, are not reported by Google to the authorities.

Even so, the company’s assurances haven’t done much to put people at ease, since it has long been feared that Google oversteps its boundaries when it scans emails, a practice that remains controversial in its own right.

The company has even been sued over its email scanning, but since April the practice has been spelled out in the Terms of Service, making it far harder to hold Google liable in court for scanning emails for advertising purposes.

So, how does Google detect indecent pictures while leaving everything else alone? Well, the company has been working for many years with organizations such as the National Center for Missing & Exploited Children.

Over the years, the Internet giant has built a database of hashes, essentially photo fingerprints, for known child abuse images. Each hash is unique to a particular image and is derived from its content rather than its metadata, so it doesn’t matter if someone changes the name of the file.
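To illustrate the basic idea, here is a minimal Python sketch (not Google’s actual code) that fingerprints a file by hashing its contents with SHA-256. Because the hash depends only on the image’s bytes, renaming the file doesn’t change it; the real systems use more sophisticated perceptual hashes, so treat this purely as an illustration.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return a hex digest of the file's contents (not its name)."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

# Renaming a file leaves its contents, and therefore its hash, unchanged:
#   fingerprint("photo.jpg") == fingerprint("renamed_copy.jpg")
```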

When Google scans an email and its contents as the message is sent, received, and stored in the cloud, as per the company’s ToS, the system also checks attachments against these hashes. The company is then legally obligated to report any matches to the authorities, who can obtain warrants and eventually arrest the culprits.
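A hypothetical matching step might look like the sketch below, which reuses the fingerprint() helper from the previous example: each attachment is hashed and checked against a set of known fingerprints, and anything that matches is flagged. The names and the placeholder database are assumptions made purely for illustration.

```python
# Hypothetical illustration of the matching step; the hash values are placeholders.
KNOWN_HASHES = {
    "3f5b...",   # fingerprints of previously identified images
    "9ac1...",
}

def flag_attachments(attachment_paths):
    """Return the attachments whose content hash matches a known fingerprint."""
    return [p for p in attachment_paths if fingerprint(p) in KNOWN_HASHES]
```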

The company is adamant that it keeps this power restricted to fighting child abuse, and the effort goes beyond Gmail: it has also been removing links to such sites from its search results, as well as the images themselves from its image search.

Other companies have similar systems in place. Microsoft, for instance, developed PhotoDNA, software that computes a mathematical hash, or signature, for known images of child sexual abuse and can recognize a photo even if it has been altered in one way or another. Both Facebook and Twitter use the technology.
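PhotoDNA itself is proprietary, but the general idea behind such alteration-tolerant hashes can be shown with the much simpler “average hash” technique: the image is shrunk to a tiny grayscale thumbnail and each pixel is encoded as a bit depending on whether it is brighter than the average, so small edits like recompression or mild resizing usually leave the signature almost unchanged. The sketch below assumes the Pillow library and illustrates only the concept, not PhotoDNA’s actual algorithm.

```python
from PIL import Image  # assumes the Pillow imaging library is installed

def average_hash(path: str, size: int = 8) -> int:
    """Perceptual 'average hash': a 64-bit signature that tolerates small edits."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per pixel: brighter than the average or not.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance suggests near-duplicate images."""
    return bin(h1 ^ h2).count("1")
```

Two copies of the same picture, one slightly recompressed, would typically differ by only a few bits under such a scheme, which is what lets these systems recognize an image even after minor changes.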

Therefore, you shouldn’t worry about Google alerting the cops if you share ordinary photos of your kids with close family and friends: only images that match the database of known abuse material get flagged.