Different biases play a major role in the spread of fake news

Opinion

Disinformation and misinformation: The battle for truth in post-uprising Bangladesh

In September 2025 alone, Bangladesh witnessed a staggering 329 documented cases of misinformation circulating across social media and even traditional media channels, according to Rumor Scanner. Of these, roughly 70 per cent concerned political issues, disproportionately targeting rival parties, the interim government, and state officials.

Video content dominated these falsehoods (215 cases), followed by text (79) and image-based manipulations (35). AI-generated content and deepfakes are no longer fringe tactics: 18 cases involved AI and 13 were deepfakes, underscoring the sophistication of modern disinformation campaigns.

The Khagrachhari-Rangamati clashes provide a chilling illustration of how disinformation can inflame tension, spread panic, and become a tool for cross-border political narratives. Dismislab and Rumor Scanner confirmed that on social media the reported death toll of four was inflated to 32, then 67, with some claims exceeding 100.

Videos and photos were deliberately mislabelled, old footage was reused, and posts framed the conflict as a genocidal attack on indigenous communities. In Bangladesh, supporters of the recently ousted Awami League used the same false death tolls to attack the interim government and Muhammad Yunus.

Petitions circulated online, and hashtags like #StepDownYunus gained traction, a grim reminder of how false information can permeate political campaigns, cross borders, and mobilise international attention.

This is far from an isolated case. Across the country, AI-generated or doctored content has been used to manipulate perceptions of both civilians and security forces. In September alone, videos circulated claiming that hill communities were fleeing army and BGB aggression, and that soldiers were injured while intervening in a clash in Khagrachhari.

Dismislab confirmed these were old incidents, some dating back to 2018, yet the visuals provoked outrage, calls for punitive action, and communal suspicion. Similarly, doctored photo cards attributing false quotes to press secretary Shafiqul Alam, such as “Bangladesh does not run with remittance from migrant workers”, went viral, falsely framing official statements.

The September statistics also reveal that misinformation often targets gendered vulnerabilities. Between January and September 2025, Rumor Scanner identified 567 false claims involving 276 women, spanning politics, entertainment, and civic activism. High-profile female leaders, including Sheikh Hasina, Tasmim Zara, and Samanta Sharmin, were subjected to campaigns that used doctored videos, fabricated AI content, and sexualised false narratives to discredit them. Entertainment figures, such as Sadia Ayman, Nusrat Imroz Tisha, and Meher Afroz Shaon, have similarly been victimised, often with AI-generated visuals or videos of women from other countries falsely attributed to Bangladeshi celebrities.

Political targeting of women is particularly pronounced. Among seven active political parties, female leaders and activists were systematically subjected to false claims. Women associated with the Awami League bore the brunt (188 cases), with Sheikh Hasina alone linked to 148 misleading items, many crafted to cast her in a positive light while muddying the wider narrative.

The National Citizen Party (NCP) saw seven female leaders targeted, some with sexually explicit AI-generated content. The Bangladesh Nationalist Party (BNP) similarly had nine female leaders targeted in 23 misinformation cases.

This is not limited to prominent figures: ordinary women, family members of officials, and July 2024 movement participants have been repeatedly misrepresented in doctored videos or falsely identified as political actors. These figures were identified in data collected by monitoring platforms such as Rumor Scanner, reflecting systematic targeting across gendered lines.

AI-manipulated content, deepfakes, and fabricated sexual allegations have been employed systematically to intimidate women, tarnish reputations, and discourage female political participation. Social media campaigns that weaponise gendered narratives amplify psychological harm, weaken social cohesion, and deter civic engagement — a phenomenon echoed globally, from targeted disinformation against female politicians in the US midterm elections to harassment of women journalists in Pakistan and India.

The September 2025 Rumor Scanner data also reveal a worrying pattern of misinformation across platforms and demographics. Facebook remains the primary vector (289 incidents), followed by Instagram (156), TikTok (82), and X (40). Domestic media outlets were involved in 13 incidents, while Indian outlets contributed two. Political parties, whether the Bangladesh Jamaat-e-Islami (69 incidents, mostly negative) or the suspended Awami League (145 incidents, mostly positive), were systematically misrepresented to shape public perception. State forces, from the Bangladesh Army to the Rapid Action Battalion, were not immune, highlighting that even institutions tasked with law enforcement are now central targets in disinformation battles.

Misinformation also intersects with communal narratives. Rumor Scanner identified 142 instances of communal disinformation in the first eight months of 2025, 37 per cent of them involving women. Sensitive topics such as sexual assault were deliberately framed in communal terms, with the potential to inflame conflict.

Globally, Bangladesh’s experience is not unique. In 2019–2020, Australia’s bushfires saw the rise of the hashtag #ArsonEmergency, spreading false narratives that arson — not climate change — caused the fires, derailing meaningful policy discussion.

During the Covid-19 pandemic, disinformation around vaccines and treatments cost lives worldwide. Poland experienced similar cross-border manipulations in 2025, when Russian-linked networks inflated casualty figures during drone incursions to destabilise trust in NATO. In each case, false narratives moved faster than fact, exploiting fear, emotion, and pre-existing social fault lines.

What makes the current Bangladeshi context particularly fragile is the intersection of multiple vulnerabilities: a history of politicised media, social tensions in hill and minority communities, and an ongoing transition from a long-dominant party system.

Misinformation is weaponised not just for political gain, but for social destabilisation. Old footage is relabelled as new; AI deepfakes are used to fabricate events; gendered attacks reinforce societal norms that marginalise women; communal tensions are stoked with false statistics or videos; and cross-border political actors amplify these falsehoods to pursue their own agendas.

The consequences are tangible. Misinformation about the Khagrachhari-Rangamati clashes, AI videos of civilians fleeing, and doctored statements about national leaders have already provoked panic, protests, and violent confrontations. Trust in official reporting and institutions erodes. Civic engagement declines when ordinary citizens fear that participating publicly could make them targets of false campaigns.

This crisis demands a multi-pronged response. Fact-checking organisations such as Rumor Scanner and Dismislab should be strengthened with institutional support, resources, and legal protection. Public campaigns should teach citizens to critically assess sources, verify visuals, and recognise AI-generated content. Social media companies must deploy rapid, localised moderation teams familiar with Bangla and regional nuances to detect and flag harmful content. Government transparency must be ensured, and gender-sensitive measures are needed to respond to online harassment, deepfakes, and sexualised false content.

Bangladesh’s digital landscape is at a crossroads. The surge of disinformation, exemplified by 329 September 2025 cases, demonstrates that the stakes are no longer merely reputational — they are political, social, and existential. In an era where a single doctored video or false headline can spark panic, inflame communal tensions, or undermine political stability, the battle for truth is inseparable from the battle for democracy itself. Societies worldwide face this challenge, but for Bangladesh — emerging from decades of authoritarian imprint and navigating complex social fractures — the urgency is heightened. Vigilance, media literacy, institutional credibility, and civic responsibility are not optional; they are the front lines in defending the very fabric of the nation.

In this post-uprising Bangladesh, truth has become a scarce but vital resource. Preserving it may well be the most revolutionary act of all.