Doctors can't be replaced: Why we need an “arguable” AI partner

I remember the long, quiet queues at the village clinics in Satkhira where I grew up. Families waited for hours under the heat, speaking in low voices, hoping for a few minutes with a doctor who was likely exhausted. In those settings, the primary challenge is not a lack of data. It is a lack of time.
Bangladesh currently has roughly 0.7 physicians and fewer than one hospital bed for every 1,000 people, far below World Health Organization recommendations. With a national shortage of more than 90,000 doctors, any technology we introduce must help ease this crisis rather than worsen it.

In late April 2026, Google DeepMind announced its "AI co-clinician" research initiative. The initiative proposes a "triadic" model of care: an AI agent working alongside both the patient and the doctor, designed to extend the clinician’s reach while keeping the human expert in control. This is a significant technical achievement, but we must ask a difficult question: if these systems are built for well-resourced hospitals in the Global North, will they truly help a rural health complex in Bangladesh, or will they create new forms of inequality?

The vision of a reasoning partner

The DeepMind initiative moves away from the old idea of AI as a "black box" that returns a single, take-it-or-leave-it answer. Their system was tested using the "NOHARM" framework, which focuses on avoiding both errors of omission (failing to recommend a needed intervention) and errors of commission (recommending a harmful one). In blind evaluations, physicians often preferred its synthesized evidence over traditional tools.

This reflects a shift from AI as a judge to AI as a collaborator. It is a principle Bangladesh should endorse, but with caution. A generic "co-clinician" designed in London or California may not understand the messy, fragmented reality of our local healthcare system.

Why collaboration is not enough

In my research on "arguable systems," I have argued that a co-clinician should do more than just offer a recommendation for a doctor to accept or reject. True collaboration requires the ability to disagree.
A doctor in a busy upazila health complex might need to argue with the AI. The patient in front of her might come from a community the AI never encountered during its training. They might speak a dialect the model cannot parse or present symptoms that do not fit the clean patterns of Western datasets.

When an algorithm fails to listen, it commits what philosophers call "epistemic injustice." This happens in two ways. First, "testimonial injustice" occurs when the AI fails to trust what a patient knows about their own body because that patient is not from a wealthy or digitally recorded demographic. Second, "hermeneutical injustice" happens when the patient lacks the specific vocabulary the AI expects. A truly collaborative system must be designed to notice these gaps and make its own uncertainty visible to the doctor.
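
To make that last requirement concrete, here is a minimal sketch in Python of what "making uncertainty visible" could look like. The entropy threshold and the deferral logic are my own illustrative assumptions, not the behaviour of any deployed system:

import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    # Shannon entropy of the predicted distribution; higher means less sure.
    p = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def triage_with_uncertainty(probs: np.ndarray, threshold: float = 0.9) -> dict:
    # Return the model's suggestion plus an explicit "defer to the doctor" flag.
    return {
        "suggested_class": int(np.argmax(probs)),
        "defer_to_doctor": predictive_entropy(probs) > threshold,
    }

# A patient the model has rarely seen tends to yield a flat distribution,
# so the system defers instead of guessing confidently.
print(triage_with_uncertainty(np.array([0.40, 0.35, 0.25])))  # defers
print(triage_with_uncertainty(np.array([0.95, 0.03, 0.02])))  # confident

The point is architectural: a flat, uncertain prediction should surface as an explicit deferral to the clinician, never as a confident guess.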

The privacy-explainability tension

Bangladesh is currently developing its digital health infrastructure and data protection laws. Our hospitals cannot simply pool patient records into a central server for legal and ethical reasons.

Federated learning offers a solution: hospitals collaboratively train a shared model without ever moving raw patient data. My work on the MedHE framework uses encryption to create a "fortress" for patient privacy. However, there is a hidden cost: the statistical "noise" that privacy techniques inject to mask individuals can also hide rare diseases or the early signals of an outbreak.
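
For readers who want the mechanics, below is a minimal federated-averaging sketch of the core idea. It is not the MedHE framework itself, which layers encryption on top, and the hospital datasets here are synthetic:

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=20):
    # A few gradient steps of logistic regression on one hospital's own data.
    w = weights.copy()
    for _ in range(steps):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (preds - y)) / len(y)
    return w

# Three hospitals, each holding a private dataset that never leaves the site.
hospitals = [
    (rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
    for _ in range(3)
]

global_w = np.zeros(4)
for _ in range(10):  # each round: local training, then central averaging
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(local_ws, axis=0)  # only model weights cross the wire

print("shared model weights after 10 rounds:", np.round(global_w, 3))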

In Bangladesh, if privacy settings are too aggressive, a dengue early-warning system might fail to detect a cluster in a remote district because that cluster appears as statistical noise. We need "equity-aware" privacy. The goal should not be mathematical purity, but clinical truth for the most vulnerable populations.
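
A toy example makes this trade-off tangible. The sketch below applies Laplace noise in the style of differential privacy; the epsilon values and case counts are illustrative assumptions, not calibrated recommendations:

import numpy as np

rng = np.random.default_rng(42)

# Weekly dengue counts by district; district 4 holds a real but small cluster.
weekly_cases = np.array([0.0, 1.0, 0.0, 0.0, 8.0, 0.0])

def dp_release(counts: np.ndarray, epsilon: float) -> np.ndarray:
    # Laplace mechanism with sensitivity 1: noise scale is 1 / epsilon.
    return counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)

print("strict privacy  (eps = 0.1):", np.round(dp_release(weekly_cases, 0.1), 1))
print("relaxed privacy (eps = 2.0):", np.round(dp_release(weekly_cases, 2.0), 1))

At an epsilon of 0.1 the noise scale is 10, so eight real cases can vanish into statistical static; at 2.0 the cluster remains clearly visible. Choosing that dial is a clinical and ethical decision, not just a mathematical one.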

Bias as a local reality

We often treat algorithmic bias as a technical bug, but it is really a reflection of the training environment. An ECG-based heart disease predictor trained mostly on men may systematically under-diagnose women, whose symptoms often present differently.

In my research on fairness-aware representation learning, we train models to ignore sensitive characteristics like gender or income when they lead to discriminatory outcomes. For a co-clinician to work in Bangladesh, it must be evaluated on our own patient populations and our own languages. Fairness cannot be an afterthought.
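
As a deliberately simplified illustration, the sketch below strips the linearly predictable trace of a sensitive attribute from a feature before a model ever sees it. Real fairness-aware representation learning, such as adversarial training, is more involved, and the data here is synthetic:

import numpy as np

rng = np.random.default_rng(7)

n = 200
sex = rng.integers(0, 2, n).astype(float)      # sensitive attribute (0 or 1)
ecg_feature = 0.8 * sex + rng.normal(size=n)   # a feature that leaks sex

def remove_linear_dependence(feature, sensitive):
    # Project out the component of the feature explained by the attribute.
    s = sensitive - sensitive.mean()
    beta = (s @ feature) / (s @ s)              # least-squares slope
    return feature - beta * s

debiased = remove_linear_dependence(ecg_feature, sex)
print("correlation before:", round(float(np.corrcoef(ecg_feature, sex)[0, 1]), 3))
print("correlation after: ", round(float(np.corrcoef(debiased, sex)[0, 1]), 3))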

Bangladesh as a research partner

Our country is not a passive recipient of Western technology. Local researchers are already building solutions that fit our specific context.
In a study I conducted with 14 healthcare professionals in Bangladesh, we found that clinicians strongly preferred "hybrid" explanations, which combine data-driven insights with established medical rules. More than half said they would trust such a system in actual clinical use because they could see the logic behind each suggestion.
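
To illustrate the pattern, not the actual system from the study, a hybrid explanation might pair an opaque model score with a legible clinical rule. The rule and thresholds below are invented for the example and are not real guidelines:

def hybrid_explanation(model_risk: float, temp_c: float, platelets: int) -> dict:
    # Data-driven part: an opaque model score between 0 and 1.
    ml_view = f"model risk score: {model_risk:.2f}"
    # Rule-based part: a criterion a clinician can check directly.
    rule_fires = temp_c >= 38.5 and platelets < 100_000
    rule_view = ("rule fired: fever >= 38.5 C and platelets < 100,000"
                 if rule_fires else "rule: no warning criteria met")
    agree = rule_fires == (model_risk >= 0.5)
    return {
        "evidence": [ml_view, rule_view],
        "note": "sources agree" if agree else "sources disagree: clinician review",
    }

print(hybrid_explanation(model_risk=0.72, temp_c=39.1, platelets=85_000))
print(hybrid_explanation(model_risk=0.81, temp_c=37.0, platelets=250_000))

When the two sources of evidence disagree, the disagreement itself becomes information the clinician can act on.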

From multilingual triage apps to AI-assisted referral systems for pregnant women, a local ecosystem is growing. Our policymakers should treat Bangladesh not as a testing ground for foreign models, but as a partner with its own research capacity.

A roadmap for the future

To move forward, I propose five essential steps for integrating AI into our clinics:
1. Demand arguability: Systems must allow doctors to contest reasoning and see structured uncertainty. A doctor should be able to argue with the AI and win (a rough sketch of this idea follows the list).
2. Local fairness benchmarks: We need our own test sets and definitions of what constitutes a harmful error in a rural setting.
3. Equity-aware privacy: We must ensure that privacy protections do not erase the signals of marginalized communities or rare conditions.
4. Invest in collaboration research: We need large-scale trials that measure trust and workflow integration, not just technical accuracy.
5. National AI audit body: A technical and ethical board should monitor algorithms before and after deployment to prevent bias or privacy breaches.
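
As a rough sketch of what step 1 could mean in software, consider a recommendation object that the doctor can contest, where the override always wins and the disagreement is logged for audit. Every name here is hypothetical, not an existing system's API:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContestableRecommendation:
    suggestion: str
    confidence: float            # structured uncertainty, shown up front
    rationale: str
    audit_log: List[str] = field(default_factory=list)
    final: Optional[str] = None

    def accept(self) -> None:
        self.final = self.suggestion
        self.audit_log.append("doctor accepted the AI suggestion")

    def contest(self, doctor_decision: str, reason: str) -> None:
        # The doctor's judgment is final; the system records why it lost.
        self.final = doctor_decision
        self.audit_log.append(f"overridden by doctor: {reason}")

rec = ContestableRecommendation(
    suggestion="start IV antibiotics",
    confidence=0.61,
    rationale="presentation matches sepsis cohort",
)
rec.contest("observe and re-test", "atypical symptoms; history does not fit")
print(rec.final, "|", rec.audit_log)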

I became a research scientist because I wanted to build technology that finally hears the voices I heard in those Satkhira queues. The DeepMind co-clinician is an impressive tool, but its success in Bangladesh will depend on whether it can adapt to us.

When we build or deploy these models, we should not only ask if they are accurate. We must ask if they listen. We must ensure that when a patient’s life depends on it, the doctor remains the final authority, supported by a partner that knows its own limits.