Human Rights Watch (HRW) has revealed a serious privacy failure involving LAION-5B, a large image dataset built by the German nonprofit LAION and widely used to train AI models. The dataset failed to safeguard the privacy of children whose pictures were swept into it; the photos were scraped from a range of sources across Brazil.
According to HRW’s statement, in some instances the dataset even exposed the children’s real names and birthplaces alongside their photos.
The collection of pictures spans different stages of the children’s lives. Some show the moment a child was born; others show birthdays or children partially undressed. Their parents were never told that the pictures would be used to train AI. Furthermore, the photos appear to have been pulled from obscure corners of the web, which helped the collection escape notice.
HRW explained its concern in a statement warning that such pictures can be used by ‘malicious actors’ to generate pornographic material:
“Once their data is swept up and fed into AI systems, these children face further threats to their privacy due to flaws in the technology.
“AI models, including those trained on LAION-5B, are notorious for leaking private information; they can reproduce identical copies of the material they were trained on, including medical records and photos of real people. Guardrails set by some companies to prevent the leakage of sensitive data have been repeatedly broken.
“These privacy risks pave the way for further harm. Training on photos of real children has enabled AI models to create convincing clones of any child, based on a handful of photos or even a single image.
“Malicious actors have used LAION-trained AI tools to generate explicit imagery of children using innocuous photos, as well as explicit imagery of child survivors whose images of sexual abuse were scraped into LAION-5B.”
When confronted with the findings, LAION, the German nonprofit behind the dataset, admitted that the children’s photos had been scraped into LAION-5B but argued that the dataset was not as flawed as HRW painted it to be.
LAION went on to advise parents who do not want their children’s pictures used to train AI to simply keep the photos off the internet. The incident raises wider questions about how organizations obtain the children’s photos used to train AI.