This is a cautionary tale…
A California man, whose identity has not been disclosed, claims that his life was ‘ruined’ by Google’s artificial intelligence after he took pictures of his toddler’s genitalia to send to the doctor for a virtual consultation.
After he took and sent the photos, Google's AI algorithm flagged the content as extremely harmful and potentially illegal…
Acting on the AI's report, police launched an investigation into the man…
They eventually found that he had committed no wrongdoing and closed the investigation against him—he was just a concerned father seeking treatment for his child.
Despite this, Google still refuses to reinstate this man’s accounts—leaving him locked out of the profile he has had for years.
What I can’t for the life of me understand is how a police investigation was launched based on information gathered by or interpreted by a robot.
Everyone knows that artificial intelligence is still in its infancy—the tech is far from perfect and is experimental in every sense of the word.
Still, I don’t doubt that we are going to start seeing a lot of stories like this making their way into the news…
Here’s what we know:
Just as predicted, Google AI’s foray into automatically scanning private pictures is having unintended consequences. https://t.co/ujyOtLQeqs
— Isley (@IsleyResistance) August 22, 2022
Daily Mail explains:
Google gave the reason as the presence of ‘harmful content’ that was ‘a severe violation of Google’s policies and might be illegal.’
Mark tried to appeal the decision but Google denied the request, leaving him unable to access any of his data, and blocked from his mobile provider Google Fi.
Because he had lost access to his phone number, it wasn’t until months later that he learned the San Francisco Police Department had opened a case against him.
— ᴛᴇᴄʜ ɪɴᴊᴇᴋᴛɪᴏɴ 🖥 💉 🤖 (@techinjektion) August 22, 2022
9 Breaking News had more on how this technology works to flag potentially harmful images:
The technology works by creating a unique fingerprint, called a “hash,” for each image reported to the foundation.
These fingerprints are then passed on to internet companies so that matching images can be automatically removed from the net.
After an image is targeted, an employee views the contents of the file and analyzes the message to determine if it should be turned over to the appropriate authorities.
The system uses the same technology that Facebook, Twitter and Google rely on to track down child abusers.
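The "fingerprint" matching described above can be sketched in a few lines of Python. This is a simplified illustration, not the real system: production tools such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, while the plain SHA-256 used here only matches byte-identical files. The hash database and function names are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints ("hashes") of reported images.
# This example entry is simply the SHA-256 of the bytes b"test".
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Create a unique fingerprint (a 'hash') for an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_for_review(image_bytes: bytes) -> bool:
    """Return True if the image matches a known fingerprint and should be
    queued for a human reviewer, per the process described above."""
    return fingerprint(image_bytes) in KNOWN_HASHES

print(flag_for_review(b"test"))         # True  -> escalate to human review
print(flag_for_review(b"vacation pic")) # False -> no match, nothing happens
```

Note that in the quoted description a match triggers human review, not an automatic report; the controversy in this story is what happens when classifiers (rather than exact fingerprints) flag brand-new private photos.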