
Consumer center asks court to halt Meta's AI training on user data

Consumer advocates warn of data protection risks

The North Rhine-Westphalia Consumer Center wants a court to order Meta to stop using user data to train its artificial intelligence.



Social media giant Meta is facing a legal challenge from the North Rhine-Westphalia Consumer Center over its use of user data for AI training. The consumer watchdog has petitioned the Higher Regional Court of Cologne to halt Meta's plans, alleging a breach of European data protection law.

Last week, the consumer center shared its concerns with Meta, asserting that the company's planned use of user data for AI training, set to begin on May 27, is unlawful and invasive. Meta responded that its approach adheres to the European Data Protection Board's guidelines and that EU users have been given the option to opt out.

Meta has argued that the dispute illustrates a fragmented European legal landscape that stifles innovation, undermines legal certainty for companies, and is at odds with the European Commission's and Germany's recent push for technology-driven economic growth.

However, Meta's data practices have drawn scrutiny for potential non-compliance with the GDPR. The company relies on "legitimate interests" as the legal basis for processing user data, but that basis must not override users' rights, especially when their data is used for purposes they would not reasonably anticipate, such as AI training.

Transparency and informed consent are further points of contention. Critics argue that, despite the opt-out mechanism, Meta may not fully inform users about how their information is used for AI training.

Because data cannot readily be erased once it has been incorporated into AI systems, the plans also raise concerns about the right to be forgotten and the right of access to personal data. These concerns are growing as Meta presses ahead despite the pending legal challenge.

Nonetheless, the consumer center's aim is not to halt the development of AI, but to ensure that it is implemented fairly and in accordance with the rule of law.

Sources:

  1. https://www.euractiv.com/section/digital/news/eco-news-facebook-sues-austrian-privacy-group-in-bid-for-transparency-win/
  2. https://www.wired.co.uk/article/facebook-gdpr-ai-training
  3. https://www.theverge.com/2018/5/17/17368030/irish-data-protection-commission-invites-facebook-privacy-complaint-get-paid
  4. https://ec.europa.eu/info/law/law-topic/data-protection/reform/regulation-text_en
  5. https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_2_03_2020_legitimate_interests_en.pdf

The dispute raises broader questions about the ethical use of user data in AI training and about compliance with policy and legislation, particularly the General Data Protection Regulation (GDPR). The politics surrounding Meta's data practices underline the importance of implementing technology policy effectively, with transparency, informed consent, and protection of individual rights such as the right to be forgotten.
