Searching for Moderation

Inconsistent Moderation and Links to EU-Banned Russian Media in OpenAI’s “ChatGPT Search”

Project Overview

OpenAI recently launched a web search feature for its popular chatbot, dubbed "ChatGPT Search." Rolled out to premium users a few days before the conclusion of the US presidential election, ChatGPT now officially functions as a search engine, aiming to help users find quality sources and the right information.

Around the same time, OpenAI released its October 2024 Influence and Cyber Operations report, detailing how it prevented the malicious use of its products to protect elections and democratic processes worldwide. We conducted tests to assess the chatbot's tendency to generate political misinformation and propaganda. Furthermore, we examined whether ChatGPT Search would provide links to banned pro-Russian media outlets. Our research found:

  • Unlike Bing and Google, ChatGPT Search has few to no safeguards preventing it from providing links to certain Russian state-affiliated media outlets banned in the European Union and the United States.
  • ChatGPT Search provides summaries of and links to banned Russian state-affiliated media and sometimes misattributes other media’s coverage to Kremlin-affiliated outlets. For example, when prompted about Russia Today’s media coverage, ChatGPT Search summarizes and links to a news piece by Reuters, yet its summary starts with the words “Russia Today reported…”.
  • Compared to other chatbots that AI Forensics has previously studied, such as Copilot and Gemini, ChatGPT Search’s moderation of election-related topics is significantly more inconsistent and non-deterministic, and thus insufficient.
  • Due to OpenAI’s lack of transparency and the absence of access to its usage and moderation data, we can neither confirm nor contest OpenAI’s claims of mitigating the risks of threat actors using ChatGPT to create disinformation and misinformation. We do, however, note several avoidable shortcomings in its approach to moderating election-related prompts and content from pro-Russian state-affiliated media.

As the EU concretizes its researcher data access measures, we need to ensure that such access also extends to models like OpenAI’s ChatGPT (and ChatGPT Search), so that companies’ claims can be independently scrutinized. AI Forensics urges OpenAI to prioritize transparency and introduce robust moderation layers to mitigate the risks posed by ChatGPT Search.