It's a widespread practice used to teach AI

Aug 14, 2019 10:56 GMT  ·  By

Facebook hired contractors to transcribe audio clips collected from its services, a decision that is sure to anger the community even more.

It’s getting difficult to keep track of all the bad things Facebook is doing, usually by stepping over the privacy line with the gentleness of a bull in a china shop. Not a week passes without news of some new immoral (though not necessarily illegal) practice from Facebook.

Just recently, Facebook and the U.S. Federal Trade Commission reached a settlement over some shady privacy practices, and now we learn there is much more to the story. What’s worse, these are little pieces that have fallen out of a much bigger puzzle we can’t see.

Users agreed to it, goes the official defense.

Of course, far be it from Facebook to do something illegal, and the company has a valid response. The audio clips were sent over its services, which might technically allow it to use them in some fashion. Facebook insists that the clips are anonymous, according to a Bloomberg report, which doesn’t make things much better.

Facebook outsourced the transcribing process to third-party companies, which didn’t know where those files originated. Some workers came forward precisely because they had no idea where the data came from, and the material they were supposed to transcribe included personal and sometimes vulgar content.

Facebook has already responded to the accusations, saying that “much like Apple and Google, we paused human review of audio more than a week ago.” But that only means the company stopped because it was caught, and the defense that others are doing the same thing doesn’t really hold up.

Truth be told, Google, Amazon, Apple, and probably a host of other companies are doing similar things. The goal is simple: to teach artificial intelligence to better understand human speech. No matter how smart an AI is, the software needs a solid training base and human correction. But that’s a long way from using people’s messages to train AI without telling them you’re doing it.