AI and the conceptual fragmentation of South Asian journalism
Algorithms are not just changing how journalism is made in South Asia. They are quietly dissolving what journalists believe they are making it for.
Consider a specific kind of silence. In Bangladeshi digital newsrooms, it has become common practice over the past two years to optimise story headlines not for editorial clarity but for platform engagement signals — the algorithmic preferences of Facebook and YouTube, which remain the primary distribution infrastructure for most Bengali-language news.
Editors do not announce this. It is rarely written into policy. It happens through the quiet accumulation of feedback: a headline rewritten because the previous version “didn’t perform,” a story deprioritised because its topic “doesn’t do numbers.” The algorithm does not give orders. It simply makes certain choices feel rational and others feel futile.
I observed this pattern closely during my time reporting on governance and digital rights, and later when training journalists in digital verification and open-source investigation. What struck me was not that commercial pressure was shaping editorial decisions — that is old news in any media system.
What was new was the invisibility of the mechanism. When an owner or advertiser kills a story, journalists know it has been killed. When an algorithm systematically demotes a category of reporting — accountability journalism, stories without shareable emotional triggers, investigations that require days to verify — it simply disappears from the metrics. Nobody has to say anything.
This is the conceptual fragmentation I want to examine. Across South Asia, AI-driven systems are not arriving in newsrooms as declared transformations. They are arriving as infrastructure — as the taken-for-granted environment within which journalistic decisions are made. And in that quietly assumed role, they are reshaping not just practice but the underlying idea of what journalism is supposed to do.
The metrics are not neutral
To understand how this works, it helps to be precise about what algorithmic optimisation actually measures. Engagement metrics — shares, clicks, dwell time, emotional reactions — are proxies for attention capture, not for informational value or democratic function. A story that provokes fear or outrage will consistently outperform one that provides careful context. A story with a false but emotionally resonant claim will circulate more widely than its correction. These are not bugs in the system. They are features of an optimisation function designed for platform growth, not public knowledge.
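To make the point concrete, here is a purely hypothetical ranking score, not any platform's actual formula: a toy weighted sum of attention signals. The weights and signal names are invented for illustration, but the structural point holds regardless of the specific numbers: nothing in such a function measures accuracy or context.

```python
# Purely illustrative: a toy engagement-ranking score.
# Weights and signal names are hypothetical, not any platform's real formula.

def engagement_score(clicks: int, shares: int, dwell_seconds: float,
                     angry_reactions: int) -> float:
    """Weighted sum of attention signals; nothing here measures accuracy."""
    return (1.0 * clicks
            + 3.0 * shares            # sharing spreads content, so it is weighted heavily
            + 0.1 * dwell_seconds
            + 2.0 * angry_reactions)  # strong emotion is a strong attention signal

# A careful explainer versus an outrage-driven false claim (invented numbers):
explainer = engagement_score(clicks=500, shares=40, dwell_seconds=180.0, angry_reactions=5)
outrage = engagement_score(clicks=500, shares=400, dwell_seconds=30.0, angry_reactions=900)
# The outrage piece dominates on every axis the score can see.
```

The absence of any term rewarding verification or context, rather than any malicious intent, is the mechanism: the function simply cannot see the qualities that accountability journalism optimises for.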
South Asian journalism enters this environment in a structurally weakened position. Press freedom indices across Bangladesh, India, and Pakistan have declined significantly over the past decade. Investigative journalism is expensive, legally precarious, and commercially marginalised. In this context, the promise of algorithmic distribution — reach without the cost of traditional circulation — is genuinely attractive, especially for smaller outlets operating in Bengali, Urdu, Sinhala, or Tamil. The platforms offer audience access that print never could. The price is a gradual subordination of editorial logic to engagement logic.
The algorithm does not tell journalists what to write. It tells them, through accumulating data, what kind of journalism survives.
The result is what I would describe as a quiet recalibration of professional purpose. Journalists working under algorithmic distribution do not typically abandon their values consciously. They adapt, incrementally, to what the environment rewards. The cumulative effect is a journalism increasingly shaped by the preferences of attention-maximising systems, rather than by the informational needs of democratic publics.
Generative AI and the verification collapse
Generative AI has added a second, more acute dimension to this fragmentation. There has been some academic work on how AI-generated misinformation is received by audiences in Bangladeshi digital news ecosystems — specifically, whether audiences can distinguish algorithmically synthesised content from verified reporting, and how that uncertainty affects their trust in news more broadly.
The preliminary findings are troubling, though not surprising. Synthetic content — fabricated quotes attributed to public figures, AI-generated images presented as news photographs, algorithmically produced voice clips mimicking politicians — circulates through WhatsApp and Facebook at speeds that outpace any plausible verification response. During Bangladesh’s 2024 political transition, several instances of AI-generated video content attributing fabricated statements to senior figures spread widely before fact-checkers could respond. By the time corrections circulated, the original content had already shaped the terms of public discussion.
What is conceptually significant here is not just the scale of the problem but its effect on journalistic epistemology — on the working practices through which journalists establish what is true before publishing. Traditional verification rests on a relatively stable assumption: that primary sources, documents, and visual evidence can be authenticated. Generative AI attacks that assumption directly. When a video may be synthetic, a document may be fabricated, and a voice recording may be algorithmically produced, the epistemic tools of the newsroom require fundamental rethinking.
For South Asian journalists, this rethinking is happening without adequate institutional support. The verification tools capable of detecting sophisticated synthetic content — forensic analysis software, access to original metadata, cross-referencing with corroborating sources — are expensive, technically demanding, and unevenly distributed. A correspondent for a well-resourced English-language daily in Dhaka or Delhi has access to resources and training that a reporter for a vernacular outlet in a secondary city almost certainly does not. The result is a widening gap, not just in production capacity but in epistemic capability: the ability to know what is true.
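To see how steep the gradient of "epistemic capability" is, consider the cheapest possible verification step: exact-match hashing of a received media file against a list of already-debunked items. The sketch below is a minimal illustration under stated assumptions; the debunked-hash store is hypothetical, and real forensic workflows rely on perceptual hashing and metadata analysis, which survive re-encoding in ways an exact hash does not.

```python
# A minimal sketch of the cheapest verification step: exact-match hashing of a
# media file's bytes against known, already-debunked content. The debunked-hash
# store here is hypothetical, for illustration only.
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical store of hashes for items fact-checkers have already debunked.
debunked_hashes = {sha256_of(b"known fabricated video bytes")}

def seen_before(data: bytes) -> bool:
    """True only if this exact byte stream has been debunked already."""
    return sha256_of(data) in debunked_hashes
```

A single re-encode or one changed byte defeats exact matching entirely, which is one concrete reason the uneven distribution of genuine forensic tooling, perceptual hashing, metadata analysis, provenance checks, translates directly into the uneven ability to know what is true.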
A fragmentation that is also a stratification
These two dynamics — the algorithmic reshaping of editorial logic and the synthetic erosion of verification — converge on a single structural problem: the fragmentation of South Asian journalism is also, and perhaps more importantly, a stratification.
At one end of the spectrum, a small number of well-resourced outlets are beginning to engage critically with AI as a tool — deploying it for data analysis, cross-language translation, and pattern detection in large document sets, while investing in digital literacy training for their staff. At the other end, a much larger number of outlets, particularly those serving vernacular audiences with limited infrastructure, are experiencing AI not as a tool they deploy but as an environment they inhabit — one shaped by others, optimised for others’ interests, and resistant to their interrogation.
This stratification matters for South Asian democracy in a specific way. The vernacular press — Bengali, Hindi, Urdu, Tamil, Sinhala, Nepali — reaches the majority of South Asia’s news-consuming public. It is, in the most direct sense, the journalism of democracy. When that journalism is algorithmically shaped toward engagement rather than accountability, when its capacity for verification erodes, and when its journalists lack the conceptual or technical vocabulary to understand what is happening to their practice, the consequences extend well beyond the newsroom.
What a grounded response requires
I want to be precise here about what I am and am not arguing. I am not technophobic. AI, carefully and critically applied, offers real possibilities for journalism in South Asia: faster translation of multilingual source material, computational assistance in identifying patterns across large datasets, and tools that could, in principle, extend the reach of accountability reporting. The question is not whether these technologies enter South Asian newsrooms — they already have. The question is whether they do so on terms that journalists understand, can contest, and can shape.
That requires, first, that journalism educators and institutions in the region treat algorithmic literacy as a core professional competency — not a specialist skill but a foundation, as fundamental as knowing how to structure a story or verify a source. It requires, second, that South Asian press freedom organisations, researchers, and civil society develop AI ethics frameworks grounded in regional contexts, languages, and legal traditions, rather than importing wholesale the frameworks developed for media environments that bear little resemblance to Dhaka, Lahore, or Chennai.
And it requires, perhaps most urgently, that journalists themselves recover the conceptual authority to define what their work is for. The metrics are not neutral. The platforms are not neutral. The models on which generative AI systems are trained are not neutral. Accepting the algorithmic environment as simply given — as the natural condition of contemporary media rather than a designed system reflecting particular interests — is itself a form of professional abdication.
The fragmentation described here is not the result of technology acting on journalism. It is the result of choices — by platforms, by media owners, by governments, and by the journalism profession itself.
South Asian journalism has survived, and occasionally defied, decades of political capture, economic precarity, and institutional pressure. The current challenge is different in form but not entirely in kind. It asks the same question that every previous challenge has asked: who does journalism serve, and who gets to decide? The difference now is that the mechanism enforcing one answer to that question is invisible, fast, and mathematically fluent. Meeting it requires journalists who can match its speed and its fluency, and who can make the invisible visible.