With elections due in countries representing half the world’s population and new technologies turbo-charging disinformation, 2024 will be a major stress test for politics in the age of AI.
2024 has been labelled a “make-or-break” year for democracy, with crucial votes due in more than 60 countries, including India, South Africa, Pakistan, Britain, Indonesia and the United States, as well as the European Union.
The first major test of how to survive an onslaught of AI-powered disinformation has already taken place.
Taiwan voters backed Lai Ching-te for president last week despite a massive disinformation campaign against him, which experts say was orchestrated by China.
Beijing regards Lai as a dangerous separatist for asserting Taiwan’s independence, and TikTok was flooded with conspiracy theories and derogatory statements about him in the run-up to the vote.
An AFP Fact-Check investigation found several such videos originated on Douyin, China’s version of the app.
How things pan out in other countries remains to be seen, however. Generative AI threatens to exacerbate deepening polarisation and an erosion of trust in mainstream media.
Fake images last year of Donald Trump being arrested and of Joe Biden announcing a general mobilisation to support Ukraine showed how far the technology has already progressed.
The last easy tells of fakery -- notably AI’s struggles with details such as fingers -- are rapidly disappearing, blunting detection efforts.
And the stakes are high.
The World Economic Forum (WEF) ranked disinformation as the number one global threat over the next two years.
Undermining the legitimacy of elections could lead to internal conflicts and terrorism, and even “state collapse” in extreme cases, it warned.
AI-powered disinformation is being deployed by groups linked in particular to Russia, China and Iran, seeking to “shape and disrupt” elections in rival countries, said analysis group Recorded Future.
The EU elections in June will likely be hit by campaigns aimed at undermining the bloc’s cohesion and its support for Ukraine, said Julien Nocetti, a Russia specialist at the French Institute of International Relations.
It would not be the first time.
The “Doppelganger” operation, launched in early 2022, used clones of well-known media outlets and public institutions to spread pro-Russian talking points, particularly about Ukraine.
French authorities and Meta -- owner of Facebook, WhatsApp and Instagram -- linked it to the Kremlin.
Paradoxically, repressive regimes could also use the threat of disinformation to justify greater censorship and other rights violations, the WEF said.
States hope to fight back with legislation, but lawmaking moves at a glacial pace compared with AI’s exponential progress.
India’s forthcoming Digital India Act and the EU’s Digital Services Act will require platforms to target disinformation and remove illegal content, though experts are sceptical about how effectively they can be enforced.
China and the EU are both working on comprehensive AI laws, but these will take time: the EU’s law is unlikely to take full effect before 2026.
In October, US President Joe Biden issued an executive order on AI safety standards.
But critics say it lacks teeth, while some lawmakers fear that over-regulation will hamper the US tech industry and benefit foreign rivals.
Under pressure to act, tech firms have introduced their own initiatives.
Meta says advertisers will have to reveal if their content used generative AI, while Microsoft has a tool for political candidates to authenticate their content with a digital watermark.
But for verification, the platforms increasingly rely on... AI.
“Automating the fight against disinformation doesn’t seem like the best way to understand hostile strategies,” said Nocetti.