📣 OpenAI Halts AI Misuse
posted 31 May 2024
Over the past three months, OpenAI has disrupted five covert influence operations (IO) in which unidentified users misused the company's products for unauthorized activities, including generating fake content and attempting political manipulation. These operations predominantly involved producing short comments, articles in various languages, and text translations.
One intercepted operation, internally referred to as "Bad Grammar" by OpenAI, was conducted by Russians targeting users in Ukraine, Moldova, and the Baltic States through the Telegram messaging app. They used artificial intelligence to refine bots and generate political comments in multiple languages.
Another operation, named "Doppelganger," which also originated in Russia, focused on producing comments in English, French, German, and other European languages that were posted on X. This IO additionally involved crafting entire articles, headlines, and Facebook posts.
The remaining three operations were linked to China, Iran, and Israel, aiming to analyze public activity and generate articles in various languages to push specific political agendas. These were published on multiple platforms including Instagram, Facebook, Medium, and X.
Key themes identified across all operations included Russia's invasion of Ukraine, the conflict in Gaza, the Indian and US elections, political decisions in Europe and the US, and criticisms of the Chinese communist government. OpenAI noted that the use of AI did not lead to a significant increase in audience reach.
Participants in these operations used AI in various ways, such as generating comments to post under their own content to simulate engagement. OpenAI conducted its investigation with the help of AI tools and published detailed findings to share insights with the research community and other tech companies.