TLDR
- Brazil’s data protection authority (ANPD) has banned Meta from using Brazilian personal data to train its AI models.
- The decision follows Meta’s privacy policy update in May, which allowed it to use public Facebook, Instagram, and Messenger data for AI training.
- ANPD cites “imminent risk of serious and irreparable damage” to users’ fundamental rights.
- Meta has five working days to comply or face daily fines of 50,000 reais (about $8,808).
- This follows similar pushback from regulators in the European Union.
Brazil’s national data protection authority (ANPD) has moved to protect its citizens’ privacy, ordering Meta to stop using Brazilian personal data to train its artificial intelligence models.
The decision, announced on Tuesday, comes in response to Meta’s recent privacy policy update, which allowed the company to use public posts, images, and captions from Facebook, Instagram, and Messenger for AI training.
The ANPD’s ruling cites “imminent risk of serious and irreparable damage” to the fundamental rights of Brazilian users. With over 102 million Facebook users and 113 million Instagram users in Brazil, the country is a significant market for Meta, making the decision especially consequential for the company.
The agency’s concerns are not unfounded. A recent report by Human Rights Watch revealed that LAION-5B, one of the largest image-caption datasets used to train AI models, contains identifiable photos of Brazilian children.
This discovery raised alarms about the potential for deepfakes and other forms of exploitation, highlighting the urgent need for stricter data protection measures.
Under the ANPD’s order, Meta has been given five working days to demonstrate compliance with the directive by amending its privacy policy to exclude the use of personal information from public posts for AI training. Failure to do so will result in daily fines of 50,000 reais (approximately $8,808 or £6,935).
In response, Meta expressed disappointment, stating that its approach complies with local privacy laws. A company spokesperson said:
“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil.”
However, privacy advocates and data protection experts have welcomed the ANPD’s proactive stance.
Pedro Martins from Data Privacy Brasil pointed out discrepancies between Meta’s data protection measures for Brazilian and European users.
In Europe, Meta had planned to exclude data from users under 18 from AI training, while in Brazil, posts from children and teenagers were potentially included. Martins also noted that the opt-out process for Brazilian users was more complicated, potentially taking up to eight steps, compared to a more straightforward process in Europe.
This decision by Brazil’s ANPD mirrors similar concerns raised in the European Union. In June, Meta paused its plans to train AI models on European users’ data after receiving a request from the Irish Data Protection Commission. The company had initially planned to implement the policy change in Europe on June 26, but this has been put on hold pending further review.
The pushback against Meta’s data collection practices for AI training is part of a broader global conversation about privacy, data protection, and the ethical development of artificial intelligence.
For Meta, this setback in Brazil, following similar challenges in Europe, may force a reevaluation of its global AI strategy. The company will need to navigate an increasingly complex regulatory landscape while still pursuing its AI development goals.