AI and apprehensions about its use in the election

Artificial intelligence is seen as a tool that can help strengthen democracy, yet it can also pose a threat to electoral transparency and democratic norms. In Bangladesh, the use of AI in elections has given rise to fresh concern. Since AI has not previously been used in any significant way in Bangladesh’s elections, and those involved in election-related work have not yet developed much expertise with it, there remains a tangible possibility of misuse. ANM Muniruzzaman writes here about the risks associated with the use of AI in the election.

Image created with AI assistance

AI has transformed the landscape of election campaigning across the world. There was a time when AI was thought of mainly as something that could help with small administrative tasks. Now, however, it has moved to the very center of modern electoral strategy.

AI does not refer to a single technology. It encompasses a range of methods, such as predictive analytics, natural language processing, and advanced generative AI. These have become essential tools for contemporary media and campaign strategies.

This change did not happen overnight. It began in the mid-20th century when political advertisements first appeared on television. Later, in the 2000s, came the era of Big Data, when candidates began gathering massive amounts of information from voters and social media. AI has now taken those data-driven strategies to a new level, one that was previously unimaginable.

2.

On one hand, AI is seen as a force that can support the advancement of democracy; on the other, it can threaten electoral transparency and democratic norms.

A positive side of AI is that it helps people access information and helps politicians understand their voters better. As a result, AI has the potential to create more equal opportunities in election campaigning.

On the flip side, AI can be used as a weapon. It can spread misinformation quickly, influence public opinion, and erode people’s trust in democracy.

It is therefore essential to pay close attention to the strategic use of AI, the ethical challenges involved, and how it can be implemented responsibly.

3.

One of the major strategies behind using AI in election campaigns is analysing voter data, or what is often called “hyper-targeting.” Because of AI, campaigns no longer have to rely on static, outdated information. Instead, they can adjust strategies in real time.

AI platforms analyse many different kinds of information: voters’ mindsets, values, media consumption habits, and more. Through this, voters are targeted in deeply personalised ways.

This is not limited to knowing which party someone supports or how old they are. The goal is to understand which issues genuinely matter to them. These issues may be national security, environmental protection, or certain social concerns. This capability is known as “political micro-targeting.”

Candidates can use AI to craft more effective and personalised messages for specific voters, especially those who are undecided or wavering. AI helps campaigns predict which messages are likely to motivate someone to actually go vote. These messages are then delivered via social media, television, or digital advertising.
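The micro-targeting logic described above can be illustrated with a deliberately simplified sketch. All names, issues, and messages here are invented for illustration; real campaign systems rely on statistical models over large voter files, not hand-written rules.

```python
# Hypothetical sketch of political micro-targeting: match each
# undecided voter to the campaign message addressing the issue
# they care about most. All data below is invented.
voters = [
    {"name": "A", "undecided": True,  "top_issue": "environment"},
    {"name": "B", "undecided": False, "top_issue": "security"},
    {"name": "C", "undecided": True,  "top_issue": "security"},
]
messages = {
    "environment": "Our plan protects rivers and clean air.",
    "security": "Our plan keeps communities safe.",
}

# Select only undecided voters and pair them with the message
# tailored to their top issue.
targeted = {
    v["name"]: messages[v["top_issue"]]
    for v in voters
    if v["undecided"]
}
print(targeted)
```

The point of the sketch is the selection step: decided voters are skipped, and each remaining voter receives a different message depending on their inferred priorities.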

The rise of generative AI has made producing political content much easier. It is now possible to create almost unlimited messages at very low cost. From a simple prompt, these tools can generate text, images, videos, or audio.

This allows campaign ads, fundraising appeals, or social media posts to be produced much more quickly. This benefits smaller parties or candidates with limited budgets as well, since they no longer need large digital teams.

4.

Modern election campaigning runs on real-time dynamics, where public opinion can shift in a matter of moments. In this context, AI-driven sentiment analysis has become an important tool for understanding voter mood. It is also emerging as an alternative to traditional opinion polls.

Using machine learning, these advanced tools can scan millions of social media posts, public forums, and other sources. They then classify public sentiment as positive, negative, or neutral. This helps candidates understand how people are reacting to a controversy, a policy announcement, or a viral post, and which issues matter most to them at a given moment.
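The classification step described above can be sketched in miniature. This is a hypothetical, keyword-based toy; real sentiment analysis uses trained machine-learning models and far larger vocabularies, and the word lists here are invented examples.

```python
# Toy illustration of sentiment classification: label posts as
# positive, negative, or neutral. Keyword lists are invented;
# real systems use trained ML models, not word matching.
POSITIVE = {"support", "hope", "progress", "trust"}
NEGATIVE = {"corrupt", "fear", "failure", "distrust"}

def classify(post: str) -> str:
    """Label a post by comparing positive vs negative keyword counts."""
    words = set(post.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

posts = [
    "I support this policy and have hope for progress",
    "this candidate is corrupt and a failure",
    "the rally is on tuesday",
]
# Tally how many posts fall under each sentiment label.
summary = {label: 0 for label in ("positive", "negative", "neutral")}
for p in posts:
    summary[classify(p)] += 1
print(summary)
```

At campaign scale, the same idea is applied to millions of posts, and the aggregated tallies feed the feedback cycle described next.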

This works like a powerful cycle. First, sentiment analysis reveals which messages are working. Then generative AI produces more content of that kind. Finally, micro-targeting delivers that content to the most receptive audiences.

Because of this dynamic system, campaigns can respond very quickly. However, there is also a risk: candidates may become more reactive to public mood than committed to steady, principled positions.

At the same time, AI can actually empower politicians. It can summarise the vast flow of comments or emails they receive, helping them better understand voter concerns and even craft personalised responses.

5.

The use of AI in political communication creates significant ethical and social risks for society and for the transparency of the electoral process. The greatest risk is that AI can accelerate the spread of misinformation.
Generative AI can create deepfakes: realistic images, videos, or voices that are almost impossible to distinguish from genuine content. AI systems are also not neutral, so there is a risk of bias.

The data used to train algorithms may already contain pre-existing social or institutional biases. Such biased data can lead machine learning outcomes to be systematically unfair.

Moreover, most AI models are trained primarily on English-language data. This can present political ideals through a narrow Anglo-American lens, undermining linguistic diversity and marginalising large groups of non-English-speaking populations. Extensive use of AI in this way can pose a direct threat to public trust.

6.

Various countries have taken different approaches to AI. In the United States, there are few strong or coherent AI policies. In India, AI has been put to both beneficial and harmful uses, highlighting the urgent need for a comprehensive and forward-looking framework.

During the 2024 US elections, both malicious and constructive uses of AI were observed. For example, AI-generated robocalls imitating President Biden’s voice were used to discourage voters ahead of the New Hampshire Democratic primary, while AI was also used in President Biden’s reelection campaign. This sparked new discussions about the ethical use of generative AI in American elections.

Similarly, in India’s 2024 general elections, political parties spent nearly 50 million dollars on AI-generated content. This included, among other things, using deepfake technology to create multilingual voiceovers of Narendra Modi and employing AI in various political campaigns. Against this backdrop, the global use and spread of AI in elections has become an important topic of discussion.

7.

In Bangladesh, a new concern has arisen regarding the use of AI in elections. AI has not been widely used in past elections, and election-related personnel have not yet developed significant expertise in AI. As a result, there is potential for misuse. Therefore, relevant authorities must quickly focus on developing skilled personnel and building the necessary infrastructure.

Due to concerns about security and privacy, experts emphasize additional safeguards and restrictions on AI use. In many cases, election officials lack technical knowledge and expertise in this area.

This capability can be developed through partnerships with universities and non-partisan civil society organisations. The main goal is for AI to function as a “supportive tool” for election workers, not as a “black box” or “replacement platform.”

To succeed in this new era, there must be a strong commitment to principles such as transparency, accountability, and auditability. Policymakers can create a framework that balances innovation with the protection of democratic integrity, for example by requiring disclosure when content is AI-generated, ensuring that final decisions and authority remain under human supervision, and instituting mechanisms for independent evaluation of AI systems.

Ultimately, AI reflects the intentions of its user. The future of democratic integrity depends on using this technology constructively and minimising its potential for harm.

* Major General ANM Muniruzzaman (Retd.) is the President of the Bangladesh Institute of Peace and Security Studies (BIPSS)
* The opinions expressed here are the author’s own