
Meta's Oversight Board Expresses Concern that Meta's New Policies may Infringe upon Human Rights

Meta's approach to human rights falls short of what a responsible company ought to achieve.


Meta, the parent company of Facebook and Instagram, is under intense scrutiny for its recent policy changes, which have loosened content moderation around hate speech and rolled back fact-checking, particularly in the United States.

Critics argue that these changes may facilitate identity-based violence and discrimination, putting vulnerable groups at increased risk. The LGBTQ+ community, immigrants, and minorities are among the groups that could be most affected.

The Oversight Board, an independent body that reviews Meta's content decisions, has highlighted structural incoherence and a lack of transparency in Meta's content policies, especially those relating to hateful ideologies. The board found that Meta's internal definitions for content removals were overly broad, not clearly disclosed, and failed to comply with international standards on freedom of expression under the International Covenant on Civil and Political Rights (ICCPR).

Despite these concerns, Meta's CEO, Mark Zuckerberg, has stated that the company's platforms are meant to be a place "where people can express themselves freely," even if things get ugly.

The Oversight Board has asked Meta to clarify and publicly define the relevant policy terms, but critics consider these transparency steps insufficient to address the deeper flaws in the policies.

Separately, in response to new EU regulations, Meta plans to suspend political, electoral, and social-issue advertising in the European Union, a move that will affect charities and nonprofits advocating for rights, including those supporting minorities and marginalized communities.

Meta has issued no definitive public statement explicitly evaluating the negative human rights impact of these policy shifts on LGBTQ+ individuals, immigrants, or minorities. External watchdogs and human rights organizations, however, view Meta's rollback in moderation as regressive and likely to exacerbate risks for these groups.

In February, Amnesty International warned that Meta's policies may fuel more mass violence and genocide. The board has also expressed concern about dehumanizing speech about disabled people that Meta's systems failed to detect.

The training materials for newly permissible speech include examples like "Immigrants are grubby, filthy pieces of shit", "Black people are more violent than whites", or "Trans people are mentally ill". Meta's new policies use dogwhistle terms like "transgenderism".

The Oversight Board has overturned Meta's decisions to leave up content that included a racist slur and content that generalized migrants as sexual predators. The board has sided with Meta in some cases and disagreed in others, reflecting its larger concern about potential human rights violations.

Former Facebook employee Sarah Wynn-Williams, in her book, details Zuckerberg's policy-making without consultation and disregard for potential harms. The Human Rights Campaign has recognized that Meta's changes will normalize anti-LGBTQ+ misinformation and intensify anti-LGBTQ+ harassment.

The board has asked Meta to address the adverse impacts its policies may have on communities such as LGBTQ+ people, including minors, and immigrants. It has also requested an evaluation of how the recent policy changes may affect human rights.

Meta's role in the Rohingya genocide has been a subject of controversy in the past. Despite these ongoing concerns, there are no reports that Meta has conducted or publicly released a comprehensive evaluation specifically addressing these human rights issues.

  1. Meta's rollback of content moderation policies may have significant implications for the future of social-media platforms, particularly by increasing the risk of identity-based violence and discrimination against vulnerable groups such as the LGBTQ+ community, immigrants, and minorities.
  2. External watchdogs and human rights organizations have raised concerns that Meta's approach to content moderation may fall short of international standards on freedom of expression, given the broad and ambiguous nature of its internal definitions for content removals, which could exacerbate hatred and discrimination on its platforms.
  3. The Oversight Board has highlighted the need for Meta to clearly define its policy terms and to conduct a comprehensive evaluation of potential human rights harms, in order to foster a safer and more inclusive digital environment for users.
