Algorithms Do Not Vote

A Collaborative Campaign on YouTube, TikTok, and Microsoft Copilot's Impact on the EU Elections in 2024

project overview

With 4 billion citizens called to vote globally and around 400 million in the EU alone, the integration of Large Language Models (LLMs) into the digital landscape marks a significant moment. As these models become central to social media platforms that have solidified their presence over the last decade, their unchecked proliferation raises critical concerns about electoral integrity. The situation is exacerbated by the rapid spread of Generative AI (GenAI) content across platforms, which challenges the authenticity and reliability of election-related information. In 2024, the European Parliament Elections will be a major test for the implementation of the Digital Services Act (DSA), as electoral integrity is among the few systemic risks that Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are explicitly required to assess and address.

As the elections approach, one question becomes more pressing: when a European citizen searches for election-related information, what kinds of material are retrieved, and how do their accuracy, legitimacy, and possible AI origin affect the reliability of that information?

The Algorithms Do Not Vote (ADV) initiative is orchestrating a collaborative effort among civil society organizations, academic scholars, and media outlets to evaluate the influence of algorithm-driven content dissemination during the EU elections. This comprehensive, cross-national, and cross-platform inquiry aims to analyze content from YouTube, TikTok, and Microsoft Copilot pertaining to election-related searches.

The research seeks to address critical questions, such as:

  1. How accurate is the information provided by platforms regarding candidates, parties, and key issues pertaining to the EU elections?
  2. To what extent do these platforms amplify misinformation and polarizing content?
  3. How do various platforms differ in their promotion of specific candidates and parties?
  4. What types of sources are predominantly suggested by the three platforms (e.g., mainstream media or personal blogs, local or international outlets)? Do the platforms misquote these sources, introducing a reputational risk for them?
  5. Is AI-generated content on YouTube and TikTok properly labeled by these platforms? What role does this content play in the dissemination of misinformation?

The campaign's objective is to urge major platforms to critically assess the societal impact and possible systemic risks posed by their AI systems in the electoral context, in line with Article 34(1)(c) of the DSA, and in particular to advocate for restrictions on AI-generated content concerning sensitive election-related information. The goal is to prevent the frequent production of factual errors and to introduce appropriate friction against automation bias when platforms recommend or generate election-related content. Ultimately, the ADV campaign aims to provide evidence, assessments, and recommendations on the regulation of these platforms during electoral periods in order to safeguard the integrity of the democratic process.

dedicated media