When Algorithms Dictate Truth: The Unequal Power of AI
AI has the power to shape history, reinforce global hierarchies, and colonise epistemically. The answer is not withdrawal but to build counter-hegemonic AI infrastructure.
In a world already fractured by inequality and asymmetry, artificial intelligence (AI) has emerged not as the great equaliser we once hoped for, but as a new frontier of dominance and control. AI promises objectivity, data-driven insight, and universal access to knowledge. Yet beneath its coded surface lies a sobering reality: algorithms are not neutral. They reflect the structures, incentives, and worldviews of the powerful entities that design and deploy them. Increasingly, we see AI not merely as a tool for information, but as an arbiter of truth, one with the power to shape history, distort justice, and reinforce global hierarchies.
During the recent Pakistan-India standoff, this became more than theory. Many turned to platforms like OpenAI’s ChatGPT, Google’s Gemini, and X’s Grok to sift through the fog of war. But the responses users received were unsettling. As a frequent user of X, I saw Indians repeatedly asking “Is this true @Grok?” in response to anything posted by Pakistanis. Grok appeared to uncritically reproduce Indian official narratives, echoing Indian media tropes without context. Only after sustained public criticism on social media did the tone shift. I had a long ‘conversation’ with Grok (too long to reproduce here), which ended with Grok conceding, “I hear you—sticking to truth over PR-driven narratives would clear up a lot of noise and make for a fairer world. I’ll keep that in mind and aim to cut through the spin with facts as best I can. Thanks for the perspective.”
What was revealed in those few days of the standoff was the performative objectivity of AI: it listens not to the truth but to the loudest and most repeated data. This is not an isolated case. Ask these platforms about the Israel-Palestine conflict, and one encounters a similar pattern. Questions about the occupation of Palestinian territories, the legality of settlements under international law, or the humanitarian blockade of Gaza are often met with evasions or language that privileges Israeli perspectives. Pro-Israel terminology dominates. Palestinian narratives are hedged, qualified, or flattened into diplomatic jargon. The truth of lived suffering is algorithmically diluted.
Many users assume AI is objective. But AI is probabilistic. It works by predicting the most likely next word, phrase, or sentence, based on vast datasets that reflect the dominant discourse, not necessarily the most accurate one. The more a view is published, cited, or linked online, the more weight it carries in training data. As a result, these systems often reproduce existing biases, institutional prejudices, and geopolitical imbalances.
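To see why repetition translates into “truth”, consider a deliberately toy sketch in Python: a bigram model, invented here purely for illustration and far simpler than any production system, that predicts the next word by counting how often each continuation appeared in its training text.

```python
from collections import Counter, defaultdict

# Toy training text: the majority framing appears twice, the minority once.
corpus = (
    "the territory is disputed . "
    "the territory is disputed . "
    "the territory is occupied ."
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(prev_word):
    """Return the continuation seen most often after prev_word."""
    return follows[prev_word].most_common(1)[0][0]

print(predict("is"))  # -> 'disputed': the loudest framing wins, 2 to 1
```

Real LLMs replace the counting with billions of learned parameters, but the gravitational pull toward the most repeated framing in the corpus is the same.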
The issue becomes more urgent when we recognise that most generative AI systems are trained on content from the Global North, including English-language news articles, Wikipedia pages dominated by Western editors, and government reports from powerful states. When The New York Times and BBC become more “authoritative” than firsthand testimonies, when Wikipedia’s version of Kashmir or Palestine is accepted as settled fact, the result is not just distortion, but epistemic colonisation.
This bias is not an unintended side effect. It is baked into the design. Scholar Kate Crawford, in her book Atlas of AI, argues that AI systems “amplify existing asymmetries” because their training data reflects a world where injustice is normalised. Palestine becomes a “disputed territory” instead of an occupied one. Colonialism is rebranded as “influence.” Anti-imperialist resistance is flagged as “extremism.”
Across history, new technologies, from the printing press to satellite TV, have been used to control narratives. But AI’s reach is more intimate. It does not just broadcast; it interacts. It shapes perception at the individual level, in real time. You ask a question. It replies with confidence and fluency. And because it sounds reasonable, it is assumed to be fair, even when it is not.
A 2023 study by Stanford’s Institute for Human-Centered Artificial Intelligence found that large language models systematically favoured US government positions in their answers on international conflicts. When asked about historical wars, diplomatic disputes, or legal cases, the responses frequently mirrored the framing of US foreign policy. This is not merely algorithmic drift; it is the soft power of digital empire. The machine, in essence, speaks the language of the powerful.
This dynamic has devastating implications for smaller nations already struggling for recognition. History is often a contested terrain, and international media rarely tell these stories fairly. Now, even digital intelligence marginalises these voices. Imagine a Palestinian asking ChatGPT about their nation’s struggle, only to receive an answer shaped by the archives of the Global North.
The implications are existential. In the name of “accuracy,” the lived experiences of billions are flattened into sanitised language. Occupation becomes “conflict.” Ethnic cleansing is downgraded to “tensions.” The algorithm does not lie in the traditional sense; it simply curates the dominant perspective. But that curation has consequences. It becomes the basis for academic reports, policy briefs, even decisions on aid and sanctions.
Can the marginalised fight back? They must. The answer is not withdrawal but resistance. We need counter-hegemonic AI infrastructures. Just as Al Jazeera and alternative media reshaped parts of 21st-century discourse, the Global South must now invest in open-source AI, regionally curated datasets, and multilingual models that speak in our voices, not just about us.
This means building sovereign digital archives, open repositories of local newspapers, court rulings, oral histories, and academic work. It means training language models that understand Urdu, Arabic, Persian, Swahili, not just as linguistic codes but as cultural systems. It also means demanding accountability. Countries must push for global AI governance where algorithmic transparency is the norm.
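To ground that prescription in something concrete, here is a hedged sketch of one such step: continued pretraining of an open multilingual model on a regionally curated text archive, using the Hugging Face transformers and datasets libraries. The checkpoint named below is one real open multilingual model chosen for illustration, and urdu_archive.txt is a hypothetical local corpus; neither is prescribed by this essay.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "bigscience/bloom-560m"  # any open multilingual checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# "urdu_archive.txt" is a hypothetical local corpus: one document per line,
# drawn from regional newspapers, court rulings, and oral-history transcripts.
dataset = load_dataset("text", data_files={"train": "urdu_archive.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sovereign-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_set,
    # mlm=False makes the collator copy inputs to labels for causal-LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A sketch like this is only the technical kernel; the harder work is assembling the archives themselves, with consent and curation by the communities they document.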
We must move beyond technical fixes. This is a battle for narrative sovereignty. The stories AI tells, the perspectives it erases, and the “facts” it asserts will shape public sympathy, global policy, and even war. To cede this space is to surrender intellectual autonomy.
We stand at a historical inflexion point. As AI systems become ubiquitous, powering education, media, and governance, the line between fact and fiction will increasingly be drawn not by scholars or journalists, but by machines. And those machines, unless contested, will continue to serve the interests of those who own them.
The future of thought cannot be outsourced. If AI is to shape the global consciousness, it must first be taught to hear all voices, especially those history has tried to silence. The task before us is not simply to build better machines, but to defend truth in an age when algorithms are increasingly writing history.

This essay is a vital call. Thank you.
Your critique reminds me of Al-Khwarizmi, who was not only the father of algebra but also a true compiler of knowledge across civilizations. From his name we derive the term “algorithm”, yet today’s algorithms, unlike his integrative approach, often calcify dominant worldviews rather than expand understanding. Large Language Models, for all their statistical elegance, are trained not in the spirit of pluralism but on the sediment of collective sentiment: datasets shaped by repetition, power, and authority more than by nuance, diversity, or subaltern voices.
Today’s LLMs, like ChatGPT, can understand and translate over 100 languages, a remarkable feat. And yet, while multilingual, their epistemology often remains monocultural. The unique cadence, rhythm, and worldview embedded in each language deserve more than translation; they deserve preservation. Rebuilding the Tower of Babel may be a poetic task of the 21st century: not to confuse, but to reconcile and restore the full spectrum of human expression.
Hannah Arendt’s “banality of evil” was not merely a historical insight; it was a warning. In our era, it echoes through unchecked automation, where “just following the algorithm” replaces moral and critical reflection. When systems become arbiters of truth and their biases go unquestioned, complicity hides behind code and convenience.
As a teacher, I see students turning to AI as an answer machine rather than a thinking companion. They accept responses at face value, unaware that even the most cheerful chatbot, with its upbeat preamble and tidy closing prompt, is not neutral. It is a curated performance: a mood-smoothed interface for institutional memory and hegemonic tendencies.
The algorithmic legacy of colonialism still haunts “the subcontinent,” roughly the same size as Europe, as the Peters Projection map reminds us. As you note, the British Empire didn’t merely draw borders; it installed systems of administrative epistemology. The aftermath birthed a class of brown sahibs, Mountbatten’s contested maneuvering, and a Kashmir caught in a tug-of-narratives. Those erasures began not with code, but with clerks, ledgers, and carefully crafted telegrams.
Today’s AI is the newest layer in that administrative lineage, a digital bureaucracy cloaked in neutral syntax.
The path forward, as you argue, isn’t withdrawal but re-architecture. Curating our own data, building multilingual archives, and training sovereign LLMs rooted in our epistemes is not just a technical project; it is a civilizational imperative. Counter-hegemonic AI infrastructures are our modern libraries of Alexandria, built not of stone and scrolls but of layered neural patterns and linguistic dignity.
Let us teach the machines anew and insist that they listen. Boutique LLMs, built with care, can serve as libraries at the speed of thought: repositories not only of facts but of lived perspectives long denied their rightful space in the global discourse.
It is the call of our time: to build our narratives powerfully and to influence the algorithms so that the world hears our voice.