Opinion
How AI and deepfakes can create alarm in elections
• Fake videos, fabricated posts and concocted campaigns created with the help of artificial intelligence are being widely used on social media with the aim of influencing voter psychology.

• Incidents of AI-driven disinformation in Bangladesh and elsewhere around the world show that the misuse of technology can distort the electoral process and pose a serious threat to democracy.

• It is crucial for political parties and candidates to fact-check information in their own campaigns, clearly identify their official digital channels, and take swift action against suspicious content.
Artificial intelligence (AI) is a significant component of contemporary technological advancement. It is capable of functioning by mimicking human-like thinking, learning abilities and decision-making processes. While AI is bringing progress and convenience to society, its misuse can also give rise to various complex problems. In particular, its application poses risks in areas such as verifying the authenticity of information, personal privacy and social security.
The misuse of artificial intelligence in fields such as elections, business and healthcare can create confusion, cause harm, and lead to ethical and social dilemmas. For example, there have been international instances of AI-based deepfake technology (near-perfect replicas of reality) being used to spread misinformation in the healthcare sector.
In the economy, too, there are major examples of fraud carried out using artificial intelligence. Globally, there have been deepfake scams in which fraudsters impersonated the voices and videos of senior officials to drain bank accounts or extract sensitive institutional information. These tactics have not only caused financial losses but have also eroded public trust in institutions.
2.
Just as technological advancement has transformed society in Bangladesh, it is also opening a new chapter in electoral politics. With the expansion of digital platforms and the easy availability of AI-based tools, political campaigning is no longer confined to stages or leaflets; fake videos, fabricated posts and concocted campaigns created with the help of artificial intelligence are now being widely used on social media to influence voter psychology.
In the context of Bangladesh, especially with the 13th parliamentary election approaching, the issue has come to the forefront of contemporary political discourse. AI-generated content is being seen as a major threat, as it can mislead voters with false information and call into question the very foundations of democracy.
Across the world, AI-driven disinformation has been used during elections to misdirect voters and shape or derail public opinion. The impact of automated social media tools and misinformation campaigns during the 2016 and 2020 elections in the United States has been widely discussed; similar instances of AI-based videos and false posts have been observed in Europe and South Asia. These have affected electoral trust and democratic transparency.
During Slovakia’s 2023 parliamentary election, an attempt was made to mislead voters by circulating a fake audio clip on social media that mimicked the voice of an opposition leader. In the 2024 New Hampshire primary election, AI-generated robocalls were used to confuse voters by spreading false information about voting dates or procedures.
There is also evidence from European Union elections of AI-driven chatbots being used to provide misleading guidance to voters. This is seen as an attempt to influence the behaviour and participation of ordinary voters. Such content typically goes viral quickly, blurring the distinction between real and false information and creating misunderstandings about the electoral process.
According to research by the German Konrad-Adenauer-Stiftung (KAS), numerous deepfake incidents have occurred in recent elections around the world, including in the United States, Turkey, Slovakia, Argentina, Indonesia, India, Poland, Bulgaria, Taiwan, Zambia and France.
Most recently, ahead of Moldova’s 2025 parliamentary election, a large volume of AI-driven disinformation was spread through more than a thousand YouTube channels and TikTok and Facebook accounts, with the aim of turning public opinion against the government and disseminating Kremlin-backed propaganda.
Such coordinated “engagement farms” have helped create false perceptions among voters and attempted to undermine trust in the country’s pro-European party PAS.
In Bangladesh’s 2024 parliamentary election, a deepfake video circulated in a constituency in Gaibandha falsely announced that a candidate had withdrawn from the race. Although the candidate eventually won, the incident raised questions at the time about the credibility of the electoral process.
When political trust is undermined through AI-driven disinformation, voters may lose confidence in the electoral system itself.
This can lead to a decline in political participation, a growing tendency to abstain from voting, or increased scepticism about the act of voting itself. Misinformation can tilt the political landscape in such a way that false narratives take precedence over substantive issues. Incidents in Bangladesh and elsewhere around the world in which AI-driven disinformation has distorted electoral processes demonstrate that the misuse of technology is dangerous for democracy.
3.
In Bangladesh, the use of artificial intelligence and deepfake technology has grown rapidly since mid-2025, surpassing the levels seen in 2024. An analysis by Dismislab shows that in the second quarter of 2025, the number of AI-generated videos and images used in the country increased significantly.
More than 1,361 unique instances of misinformation were identified, of which nearly one thousand related to political matters. One example of targeted misinformation was a photo card that went viral on social media making a false claim about a leader of the Ganadhikar Parishad, which was later debunked through fact-checking.
FactWatch reports likewise show that AI-generated deepfake videos targeting political parties and administrative officials are being created and circulated.
According to a report by the Dhaka Tribune, AI-driven videos on Bangladesh’s social media have altered the statements of political leaders and administrative officials, creating confusion and division.
A research report notes that ‘fake information and malicious campaigns’ have posed unprecedented risks to the credibility of the electoral process, social stability, and the political participation of women and marginalised communities. The aim of such misinformation or AI-generated content is generally to influence voter psychology, damage the reputation of opponents, create division and tension, and mislead public opinion regarding election campaigns, thereby endangering free and fair elections. Addressing this situation requires a multidimensional approach in Bangladesh.
Firstly, fact-checking platforms and truth-verification initiatives must be strengthened so that voters can quickly verify which information is false.
Secondly, the Election Commission and the government need to enhance their technical capacity to detect AI-driven misinformation and take legal action. As part of this effort, plans have already been proposed to launch a dedicated mobile app or a digital monitoring system.
Thirdly, political parties must ensure transparent campaigning and adhere to digital ethics, committing themselves against the use of deepfakes or fabricated information.
Fourthly, the media and civil society need to focus on raising public awareness, preventing misinformation, and promoting evidence-based reporting. Citizens themselves should take care to verify controversial content they encounter on social media before sharing it, and avoid spreading false information. Educational institutions and youth organisations can provide training in misinformation detection so that young voters become proficient in digital analysis.
An effective framework must be established by integrating the Digital Security Act, electoral laws, and appropriate regulations. Such a framework should ensure that the dissemination of AI-driven misinformation is strictly punishable. If, in Bangladesh’s upcoming elections, such narratives reach voters and distort public opinion, the impact will extend beyond election results, negatively affecting the country’s democratic stability and political credibility. Therefore, timely high-level preparation, enhanced technical capacity, and the development of political and social awareness are essential.
Ultimately, the key to safeguarding against the misuse of AI technology lies in information awareness, technical preparedness, a robust legal framework, and a culture of accountable politics—all of which will ensure free, transparent, and fair elections in Bangladesh.
4.
Taking coordinated and multidimensional initiatives to prevent the misuse of artificial intelligence is now an urgent necessity. First, the government and relevant regulatory bodies should formulate clear laws and policies that explicitly define AI-driven misinformation, deepfakes, and digital fraud as criminal offences, with provisions for swift justice and exemplary punishment.
The Election Commission and political authorities should incorporate transparent guidelines for the use of AI and digital content in electoral codes of conduct and require candidates and parties to commit in writing not to use misinformation or deepfakes.
It is essential for political parties and candidates to fact-check information in their own campaigns, clearly identify their official digital channels, and take swift action against suspicious content. For law enforcement and security agencies, the formation of specialised cyber and AI forensic units is necessary, capable of rapidly detecting, tracing the source of, and removing fake content.
At the same time, the media, technology platforms, and civil society should strengthen coordinated fact-checking, public awareness initiatives, and digital literacy programmes. This will enable ordinary people to distinguish between false and true information themselves. Only the collective and responsible actions of all these actors can prevent the misuse of artificial intelligence and safeguard society, security, and democracy.
Law enforcement agencies can play a central role in preventing the misuse of AI. First, they can establish specialised cyber units to identify and investigate AI-driven misinformation, deepfake videos, and digital fraud, monitoring social media, websites, and other digital platforms. Second, they can ensure punishment for offenders and alert the public by conducting swift and exemplary cases under relevant laws.
Finally, it must be noted that Bangladesh was under the grip of authoritarianism for over a decade and a half. Now there is an opportunity to restore democracy. But if AI is used to influence election outcomes, the country’s democratic stability, voter confidence and social cohesion could be undermined, political tensions could rise, and its international reputation could be damaged.
To address this challenge, fact-checking platforms must be strengthened, the Election Commission’s technical capacity enhanced, and political parties and media compelled to follow transparency and ethical campaigning. Moreover, an effective framework integrating digital security and electoral laws is urgently needed.
* Dr. Md. Mizanur Rahman is an economist and researcher
* The views expressed here are the author's own