Artificial Intelligence-based technologies are being hailed as great equalisers in education, capable of personalising instruction, bridging access gaps and accelerating literacy. However, current AI systems tend to reinforce existing inequities unless they centre local realities, and we wonder whether the equaliser effect holds true for Pakistan. Furthermore, how can emerging technologies actively support educators and learners in ways that ground innovation in context and culture, rather than reinforce inequity?

Manail Anis, who researches the effects of technodeterminism on the replacement of communal learning, examines these assumptions more closely. She asks, “For whom is AI designed, and who does it leave out?” Too often, Manail finds that the answer lies in the infrastructure and assumptions of the industrialised Western countries where such AI technology is designed. AI tutoring platforms and ‘personalised’ learning solutions are built on datasets, languages and pedagogical models that reflect particular cultural and economic realities. They are not at all reflective of our reality in Pakistan: the crowded urban classrooms, the under-resourced rural schools or the communal madrassas that we ignore at our own peril.

This mismatch is not simply about technical details. Manail argues that Pakistan’s fragile internet connectivity makes access to online resources unreliable, while funding, infrastructure and teacher availability impose further limits. These challenges are urgent and deeply consequential for learners in places where education is a profoundly social endeavour, dependent on strong community and interpersonal connections. Yet the relentless hype and rhetoric around AI dangerously flatten these realities, treating literacy and learning as merely technical problems to be solved with an app or chatbot. This is technological determinism: the assumption that technology alone, without consideration of critical human and social factors, can determine outcomes. The belief that tools alone can transform learning, regardless of context, is not only misguided but risks real and immediate harm to educational progress.

This framing of technology determining learning outcomes erases learners’ voices and needs. When AI-driven education policies reach the Global Majority, they are typically imported narratives from the Global North, promising personalisation and innovation, yet remaining remote because they rarely begin from our context. Our central argument is that AI is not neutral and its actual impact depends on whether it recognises the diverse, socially rooted classrooms at the heart of Pakistani education.

This dynamic reveals a more profound paradox. In her research on integrating AI as a third voice in tech-enabled ESL classrooms, Saman Zahid argues that while AI’s marketing promises faster learning, greater access and personalisation, it masks a glaring truth: this supposed bridge to a standardised body of knowledge often becomes an immediate barrier in contexts like ours. The archive that AI is trained on has gaps that we, as educators in the Global South, ought to fill. AI in education thrives on English-only archives, exam-driven pathways and Western datasets, forcing students worldwide to conform to a single output standard and alienating their individual preferences. There is a critical paradox here: while our learners can now access AI, AI still cannot access them. Their languages, stories, cultural context and circumstantial limitations, the breadth and depth of their daily struggles, remain invisible. Multiculturalism, an asset, is therefore treated, with alarming urgency, as a problem to be fixed.


Given these challenges, what might contextual AI look like? Saman Zahid’s ongoing research on the ethical use of AI in multicultural classrooms offers some insight. She suggests AI that is multilingual by default, socially embedded, grounded in local knowledge and teacher-centred rather than teacher-replacing.

Saman Zahid argues that the bias in AI-generated knowledge, which presents one ‘standard’ of knowledge as truth or fact, is not an accidental byproduct of using AI; it is an intentional outcome of deploying AI as the third voice in education. AI systems are trained far from our classrooms and built on Western epistemologies, and what they leave out of the record is a silence that matters: students internalise hierarchies about what constitutes valid knowledge.

Her research shows that educational contexts are flattened in policy talk of ‘standardisation’: madrassa learning, oral traditions and indigenous histories disappear. A government school in Karachi, a madrassa in Mangla and an elite English-medium school in Lahore are distinct, yet AI treats them as similar ‘users’ and ‘consumers’. Even keeping local context in mind is insufficient without true co-creation in the design and development of AI, not just in its deployment. Learners should not just be users; they could be co-authors, shaping systems that reflect their own realities.

This brings us to a crossroads: AI presents both opportunities and risks. Many current tools reproduce monolingual, exam-driven models. They require students to leave their identities behind, treating Urdu, Punjabi and other regional languages as obstacles rather than scaffolds.

To move forward, Saman Zahid’s work on the ethical use of AI as a co-creator suggests treating it as a kind of third voice in the classroom, not replacing teachers or learners but mediating between them. AI could recognise patterns in how a student moves between languages, provide timely feedback and make translanguaging visible as a resource rather than a weakness.

At the same time, we know that AI cannot replace empathy, creativity or the cultural grounding that teachers provide. But it might extend their capacity if designed with context in mind. The question is whether AI in language learning will simply continue along extractive patterns or whether it can truly serve as a bridge: between languages, between learners and teachers, and between local classrooms and global knowledge.


This possible bridge highlights the urgency: AI in education is no longer a distant concept; it is already reshaping classrooms globally. The question before us is not whether to act, but how. If we let AI enter our education system without truly hearing from those who face its daily realities, from the learners and teachers to the parents and caregivers who stretch every rupee for school fees, we risk repeating mistakes that have long left the majority behind.

Designing AI products with these voices at the table is about much more than inclusion. It is about dignity and relevance. A key focus of Aanya Niaz’s practice and research is learner agency in the use and consumption of AI. She charts the contours of AI from the user’s perspective, suggesting the creation of AI that is inclusive by default. When a student in a government school in Sindh or a low-cost private school in Punjab opens a digital learning app, they should see something that speaks their language, reflects their environment and matches their pace, because their reality matters. A child learns faster when the AI calls out names that sound like their cousins’, cracks jokes they’d hear in their mohalla and speaks in the rhythm of their own dialect, because, unsurprisingly, learning only sticks when it feels like it belongs to you.

When parents, teachers and learners themselves are invited into the design room, AI stops being a ‘foreign import’ and becomes a local tool. Aanya argues that it’s not only students who belong in the AI design room, but also their parents, who are the real decision-makers in many Pakistani households. They do not need résumés or ticked donor boxes to earn a seat; their lived realities are the expertise. The same applies to educators, many of whom harbour the silent fear that AI is poised to replace them, especially in contexts where reliable information about these technologies is scarce. Bringing them into the design space does something unexpected: it transforms fear into a sense of authorship. When teachers and parents shape the tools, they stop bracing for disruption and start steering it. That shift, from being potential casualties of change to co-creators of it, is what genuine innovation in education could look like, and importantly, what it should look like.

The payoff for genuine co-design is transformative. Products grounded in users’ realities and built with trust improve learning outcomes, because teachers and students come to see these tools as allies. If AI in Pakistani education is to be meaningful, Aanya argues, it must serve those who have been historically left out, not just the elite. That is the argument we advance: centring AI on the margins creates real value.


All three perspectives on AI here press the same urgency: Pakistan must prioritise context now, not later. This reflection is only a beginning; the stakes are immediate. AI may dominate the global zeitgeist, but here in Pakistan, its success or failure hinges on whether it honours the daily choices of parents balancing school fees and groceries, the resilience of teachers holding overcrowded classes together and the ingenuity of students weaving multiple languages into thought and learning. Our response cannot wait.

Pakistan has a chance to rewrite the global script: to demonstrate that dignity and agency are not afterthoughts, but the foundation of real innovation. AI could become more than a digital tutor. It could act as a mirror, reflecting the brilliance of our multilingual, uneven, resource-stretched but deeply resilient classrooms, proving that intelligence is not the monopoly of datasets built elsewhere.

This is what drives us forward. The true equaliser for education in Pakistan will not be AI alone, but rather our willingness to let our classrooms — imperfect yet resilient — lead the way. By centring local voices and honouring the lived realities within our education system, we can ensure AI becomes a tool for empowerment rather than exclusion. The future of learning in Pakistan depends on our resolve to demand dignity, agency and relevance at every step.

Manail Anis Ahmed

Manail Anis Ahmed is a lecturer at Johns Hopkins University and a member of the World Economic Forum AI Governance Alliance.

Aanya F. Niaz

Aanya F. Niaz is a global leader in AI and Human Flourishing, and a PhD candidate at the University of Cambridge. More at www.aanyafniaz.com

Saman Zahid

Saman Zahid is a Learning XP Lead for English as a Second Language Programs at Noon Academy (KSA).