Digital manipulation: Deepfake fears ahead of the election, when every video breeds suspicion

In today’s world, the gravest assault on democracy no longer comes through ballot-box stuffing or the menacing presence of armoured vehicles. It arrives silently on our smartphones, where a hyper-realistic, AI-generated video can shake a voter’s trust within seconds. The date of Bangladesh’s 13th parliamentary election was announced on 12 December, and with the national polls approaching, the country now finds itself facing an invisible, algorithm-driven “uprising”.

In several recent elections around the world, we have witnessed the deliberate and systematic spread of deepfakes. Voice cloning has become commonplace. Videos are produced showing opposition leaders apparently making bizarre or extreme statements. AI-generated news anchors have been used to broadcast false information. On the eve of polling day, fake announcements have circulated on social media claiming that a candidate has withdrawn from the race.

These are neither jokes nor spontaneous falsehoods; they are coordinated and calculated digital attacks designed to force voters to distrust their own eyes and ears. As a result, the old belief that “seeing is believing” has collapsed, replaced by a new reality: “seeing means suspecting”.

The responsibility for identifying deepfakes no longer rests primarily with technology; instead, it has shifted decisively onto ordinary citizens. Given the speed and scale of social media, no technical detection system can verify every video in real time. This makes individual judgement and basic forensic awareness essential for every Bangladeshi voter. It is in this context that the following digital forensic guide is offered, in the hope that it may help voters navigate this new threat.

A forensic guide to identifying deepfakes

Deepfakes often reveal themselves in the subtlest details of the human face. Careful attention to how light falls on the face or eyes, whether shadows appear natural, and whether the skin tone of the face, neck and hands matches can expose manipulated footage. In many deepfakes, the eyes lack a natural sparkle, blinking is reduced, or the eyes appear slightly asymmetrical.

Another major giveaway is the mismatch between lip movement and voice. At times, the movement of the mouth does not fully align with the audio; the sound may lag slightly, or the lips may appear unnaturally smooth or rubber-like. In some cases, the audio itself sounds excessively perfect, overly clean and smooth, lacking the natural pauses, breaths or imperfections typical of human speech. These are all classic indicators of deepfake manipulation.

However, deepfakes are not identified solely through technical flaws; behavioural inconsistencies are equally important. If a political leader appears in a video making statements that are entirely inconsistent with their known behaviour, political stance or past rhetoric, the content should immediately raise suspicion.

Deepfakes are often engineered to provoke instant anger or hatred. Therefore, if a video triggers a strong emotional reaction, it demands closer scrutiny. Another common tactic is the use of urgent prompts such as “share immediately”, designed to bypass rational judgement and accelerate the spread of false information.

What needs to be done

The Bangladesh government, the Election Commission (EC) and international partners have recognised the seriousness of this threat and its implications for democracy. The EC has gone beyond expressing concern and has announced concrete steps. The chief election commissioner has publicly warned that AI could pose a greater threat than traditional forms of electoral violence.

Plans are underway to establish a central cell dedicated to countering deepfakes and AI-generated disinformation. This unit will coordinate with the BTRC, the ICT Division and various cyber-security agencies to ensure the rapid removal of harmful content. At the same time, the EC has proposed amendments to the Representation of the People Order (RPO) to regulate the misuse of AI and social media and to prevent violations of the electoral code of conduct.

International organisations have also joined these efforts. Under UNDP leadership, a project called BALLOT is being implemented, with the involvement of agencies such as UN Women and UNESCO. The initiative goes beyond monitoring; it focuses on raising awareness among media professionals and citizens, building capacity, and strengthening Bangladesh’s electoral process by drawing on international experience. These are vital steps and demonstrate that the state is not ignoring this new digital battlefield.

Yet the harsh realities must also be acknowledged. Because of language barriers, most deepfake detection tools do not function effectively in Bangla, making it extremely difficult to verify viral Bangla-language videos quickly. Independent fact-checking organisations such as Rumor Scanner and Dismislab struggle to operate at scale owing to funding constraints, staff shortages and a lack of advanced AI forensic expertise. As a result, the Election Commission cannot rely on them entirely.

Compounding this challenge is the declining level of public trust in the media, which leads many to view official explanations or fact-checks through a political lens. This further complicates efforts to counter disinformation. Moreover, a significant segment of the population, particularly in rural and poorly connected areas, still lacks digital literacy, making them especially vulnerable to deception by deepfakes.

Addressing these challenges requires a coordinated approach in which citizens themselves become the first and most critical line of defence. When encountering suspicious videos or posts, citizens should take three key steps:

First, they should report the content directly on the platform where it appears, using options such as “False Information” or “Impersonation”, so that the platform’s moderation systems can act on it swiftly.

Second, the content should be forwarded to independent fact-checking organisations such as Rumor Scanner or Dismislab, allowing them to verify the material and inform the public.

Third, if the content violates the electoral code of conduct, incites violence or targets a specific candidate, it must be reported to the local election commission office or the police cybercrime unit (phone: 01320010148; email: [email protected]; Facebook: https://www.facebook.com/cpccidbdpolice; complaints regarding misogynistic digital content in election campaigns can be lodged at: 01320000888).

A deepfake is not merely a false video; it is a weapon of psychological warfare designed to mislead and undermine our capacity for democratic decision-making. In this environment, awareness, scepticism and responsible behaviour are our strongest defences.

By respecting government and international initiatives while maintaining personal vigilance and accountability, we can collectively defeat this form of deception. Our goal must be to transform every mobile screen into a window for verifying truth, so that our democratic choices rest on facts, not confusion. The choice before us is clear: will we practise conscious media consumption, or surrender to an invisible algorithmic uprising?

* Rezwan Ul Alam is an associate professor and coordinator, Department of Media, Communication and Journalism, North South University.

* The views expressed are the author’s own.