Achilleas Kostoulas

AI in ELT: Two Possible Futures

AI is reshaping language education, challenging teacher agency, destabilising assessment, and exposing geopolitical biases. This post examines the stakes and argues for a humanistic, intellectually grounded ELT future.

It would be an understatement to say that Artificial Intelligence (AI) has settled into language education faster than anyone expected. Tools that were novel two years ago now feel ubiquitous: chatbots practising pronunciation, automated feedback on writing, AI-assisted lesson planning, translation everywhere. And while we, as a discipline, have been good at generating blog posts, webinars, and a fair dose of hype (alternating between naive, uncritical optimism and panic), we have been slower to develop more substantial work, such as a comprehensive, evidence-informed overview of how AI is actually used in language education and how teachers experience it.

The British Council’s 2024 report, Artificial intelligence and English language teaching: Preparing for the future, attempts exactly that. It synthesises a decade of research, a global survey of 1,348 English teachers, and 19 interviews with policymakers, academics, and EdTech leaders to produce a rich picture. To be sure, the picture is uneven in places and occasionally aspirational, but it is nevertheless an important foundation for any discussion of AI in ELT.

In this post, I offer three layers of engagement with the report: I begin by summarising what seem to me its most important findings, in a practical explainer for teachers; next, I offer a critical commentary; and finally, I attempt a synthesis of what all this might mean for the future of language teaching and learning.

What this report means for your classroom

If you’re a teacher, the report contains several insights that are immediately relevant to your everyday practice. Here are what I perceive to be the main takeaways.

Teachers are already using AI, but mostly for materials and practice

The survey shows that:

  • 57% use AI to create teaching materials
  • 53% use it to help students practise English
  • 43% use AI to create lesson plans
  • Only 23% use AI for grading
  • 20% feel adequately trained

In other words: the entry points are practice and preparation; assessment and pedagogy are still human-led.

What this means for you
AI is good at first drafts, not final products. It speeds up planning and generates ideas, but it does not understand your learners, your context, or your intentions. What makes a language lesson good is still your pedagogical judgement.


AI supports all four language skills, but more obviously the productive ones

According to the language teachers surveyed, AI can help learners develop:

  • Reading (79% agree)
  • Speaking (76% agree)
  • Writing (75% agree)
  • Listening (74% agree)

However, the research to date suggests that the strongest empirical evidence for the effectiveness of AI-supported pedagogy concerns tasks such as pronunciation practice, vocabulary support, grammar feedback, lowering speaking anxiety, and personalised practice.

What this means for you
AI is best seen as a practice partner, not a teacher. Particularly for speaking, it offers the “safe space” some learners need to experiment without judgement.


Learners must still learn to write without AI

70% of teachers believe students should be able to write independently of AI tools. This is an important message. Writing is not merely producing correct sentences; it is thinking on the page. Over-reliance on AI risks bypassing the cognitive processes that make writing educational.

What this means for you
Be explicit about when AI is allowed, when it is not, and what ‘using AI’ means (e.g., editing? brainstorming? rewriting?). Develop classroom routines that foreground process and writer agency, not the product or the machine output.


AI might not replace language teachers, but it will redefine their role

A majority of teachers:

  • disagree that AI will replace them by 2035,
  • do not expect automated translation to make language learning obsolete,
  • but are concerned about the lack of training.

The report confirms this: AI will gradually absorb routine, repetitive, and formulaic tasks. It is unlikely to replace responsiveness, pastoral care, complex communicative teaching, contextual pedagogical reasoning, and the human relationship that makes language education meaningful.

What this means for you
Your future role will probably involve more coaching, more feedback, and more design of learning environments, rather than routine worksheet production.


Inclusion and ethics matter as much as pedagogy

Teachers and interviewees shared several concerns:

  • linguistic bias (e.g., privileging “standard” English), which risks marginalising students from diverse linguistic backgrounds and hindering their academic performance;
  • data privacy, since digital platforms raise questions about how student information is collected, stored, and used;
  • widening digital divides between students with and without access to technology, which threaten educational equity;
  • opaque training data, whose lack of transparency can introduce biases into classroom AI applications;
  • the influence of Big Tech on curricula, which risks prioritising corporate interests over educational values and eroding the independence of educational institutions.

What this means for you
AI literacy is no longer optional. Learners need guidance not just in using AI, but in questioning, critiquing, and contextualising it.

Some critical points about AI in Language Education

The British Council report is rigorous, far-ranging, and timely, and in writing this commentary it is not my intention to downplay its strengths. However, like all ambitious projects, it has blind spots. So, in what follows, I attempt a critical reading and raise some issues that I feel warrant more reflection.

AI is not a single, undifferentiated category

The report acknowledges definitional problems, but it sometimes treats “AI” as if it were a single thing. In practice, adaptive learning, machine translation, LLMs, and conversational agents are entirely different systems, with different affordances and risks.

For instance, adaptive learning systems personalise educational content to individual learner needs; machine translation converts text between languages, often missing nuance and cultural context; LLMs generate human-like text from vast training data but can produce unreliable or inappropriate output; and conversational agents interact dynamically with users, yet lack the depth of understanding a human interlocutor brings.

Why this matters:
We cannot meaningfully discuss pedagogy, bias, or training needs until we differentiate between the underlying technologies.


Pedagogy remains the weakest part of the AI ecosystem

The report notes that AI tools still mirror behaviourism, grammar-translation tendencies, “lecture and quiz” models, and individualised skill practice. This reflects a familiar pattern in educational technology: new tools tend to amplify existing pedagogies rather than transform them, replicating outdated practices instead of fostering innovative approaches to learning.

AI could, in principle, personalise learning experiences and promote critical thinking; but without thoughtful integration, these tools may simply reinforce a status quo that limits pedagogical growth and the exploration of more dynamic, engaging instructional strategies.

Why this matters:
Language education has a rich tradition of exploring innovative pedagogical methods, often centred on the ideal of the learning group. The uncritical adoption of mainstream and dated pedagogical models risks disrupting this tradition.


Teacher training is a socioprofessional, not technical, issue

The report repeatedly notes that teachers “lack AI literacy,” but it treats this as a narrow, fixable problem: something the profession might address with more webinars, better tutorials, and lists of classroom tips. What it does not interrogate is the deeper tension that AI brings to the surface:

Are teachers merely technicians delivering pre-packaged content, or are they intellectuals who interpret, shape, and critique educational practice?

Ignoring this fundamental question means that we, as a profession, remain blind to questions such as:

  • What forms of pedagogical knowledge are eroded when lesson design is outsourced to systems optimised for efficiency rather than educational judgement?
  • What becomes of novice teachers, whose early professional years traditionally involve learning to reason pedagogically, not just to operate tools?
  • And crucially: who holds power in a classroom where the design of learning is increasingly influenced by commercial platforms rather than by teachers’ contextual, ethical, and disciplinary expertise?

Why this matters:
The concerns listed above are not just questions about “training.” They are questions about professional autonomy. The change brought about by AI is not just a matter of more efficient workflows; it touches the core of our professional identity: are we technicians in someone else’s system, or public intellectuals in a democratic society?


The geopolitics of AI remain under-discussed

While the report does hint at the mismatch between AI training data and the contexts where AI output is ultimately deployed, it stops short of connecting this to the long-standing geopolitical dynamics of ELT.

Critical approaches to our field have long shown that English language teaching is not merely a pedagogical enterprise. It is also a space where centre-periphery relations are reproduced, resisted, or renegotiated. What counts as “standard” English, which communicative practices are deemed legitimate, whose expertise is trusted, and who gets to define quality: these are all questions that the cultural and political influence of Anglophone centres has played a dominant role in shaping.

In the context of AI-assisted pedagogy, these dynamics acquire a further layer of complexity. As language teachers, we need to remain vigilant about questions such as:

  • Whose English do the models encode? We need to remain alert to the ways that AI systems privilege dominant varieties of English, normalise certain discourse styles, and erase legitimate local Englishes or translanguaging practices. All these practices reinforce Anglophone linguistic hegemony under the guise of “neutral” output.
  • Who gains power when pedagogical content is automated? In AI-assisted language pedagogy, power is consolidated in the hands of Big Tech companies whose priorities are, by design, commercial and scalable, not contextual or humanistic. When lesson content, assessment tasks, or explanations are authored by systems produced by a small number of corporations, we risk seeing a global standardisation of pedagogical discourse, and reduced visibility of local knowledge traditions and teacher-led innovation.

Why this matters:
Language education has always been political; AI does not make it less so. If anything, it makes the latent power dynamics less visible.


Assessment is a looming crisis

While the report mentions cheating, it touches only lightly on thornier issues: the collapse of traditional writing assessment, the impossibility of verifying authorship, and the emerging “AI detection” pseudoscience. We are not merely dealing with isolated cases of misconduct; we are confronting a structural breakdown of how writing has traditionally been assessed.

Faced with this, the profession seems to be experimenting with some very worrying trends, including:

  • A drift towards surveillance-based pedagogy, with technological solutions such as proctored exams, keystroke loggers, lockdown browsers, and even biometric monitoring.
  • A narrowing of writing tasks to what AI cannot easily do (yet), such as reflective accounts, experiential narratives, etc. While such measures may be defensible in the short term, they risk constraining writing instruction to genre niches rather than the full repertoire of communicative competence learners actually need.
  • A shift away from process-oriented writing pedagogy. As language teachers prioritise in-class writing or timed assessments, this reduces opportunities for drafting, peer review, iterative feedback, and extended engagement with texts. Ironically, the very presence of AI (which could and should support process writing) seems to be leading to its marginalisation.

Why this matters:
We are approaching a moment where the assessment tail will wag the pedagogical dog.

What this means for ELT

Stepping back, the report offers a snapshot of a system under stress: technological acceleration colliding with pedagogical inertia, institutional caution, and global inequalities. I have often found it helpful to approach such situations through a complex systems lens, which affords a way of understanding how the system functions as a whole.

AI is neither an incremental update nor a revolution. It is a perturbation: an external shock that disturbs the existing dynamics of language education. In complex systems, perturbations do not determine outcomes; they destabilise trajectories, making certain futures more likely and others less viable. What follows is not linear progress but a period of turbulence, during which the system reorganises itself around new constraints, new affordances, and newly redistributed forms of power.

This destabilisation opens two broad, competing developmental pathways.

The path towards automation and technocratic governance

In this scenario, the pull of AI encourages institutions to adopt pre-packaged curricular templates, automated content pipelines, data-driven feedback mechanisms, and a logic of education defined by efficiency, measurability, and scale.

Here, teachers risk being positioned as technicians: operators of systems, supervisors of dashboards, custodians of machine-produced content. Pedagogy becomes procedural, modularised, and easily offloaded to platforms. Assessment narrows to what we can effectively monitor and reliably verify. The implicit belief is that it is possible to optimise complex educational work through algorithmic rationality.

The endpoint of this trajectory is a model of ELT that is tidy, governable, and scalable, but thinner, epistemically poorer, and more constrained in its capacity for humanistic or critical engagement.

The path towards humanistic amplification and teacher intellectualism

The second trajectory sees AI as an opportunity to reassert the distinctly human dimensions of teaching: interpretation, ethical judgement, contextualisation, relational work, and the intellectual labour of theorising practice.

In this pathway, AI is positioned not as a replacement for teacher expertise but as a catalyst that frees space for deeper engagement with learners, more ambitious pedagogical design, exploratory or project-based learning, reflective writing and meaning-making, and the development of new literacies that treat AI outputs as objects of critique, not unquestioned authority.

Here, teachers are not technicians but intellectuals: practitioners who interrogate, adapt, resist, and situate technological tools within broader educational commitments. AI becomes one voice in a polyphony of resources, not the conductor.

The endpoint of this trajectory is a more complex, dialogic, and critically aware ELT ecology: one that recognises the social, cultural, and political work of language education.

The stakes of the perturbation

Both trajectories are plausible. Both are already visible, though perhaps not equally so to everyone. The direction the field takes will not be decided by the technology itself but by the distribution of agency within the system:

  • Will institutions prioritise efficiency or pedagogy?
  • Will teacher education cultivate critical capacities or tool proficiency?
  • Will AI companies set the terms of engagement, or will educators articulate their own?
  • Will assessment regimes adapt to protect the integrity of pedagogy, or will pedagogy contort itself to satisfy assessment?
  • Will teachers be seen —and see themselves— as intellectuals shaping the future of their field, or as technicians executing someone else’s vision?

A perturbation is a moment of possibility. But it is also a moment of vulnerability. The field will settle into a new equilibrium; that much is certain. What remains open is which equilibrium we choose to construct.

Concluding Thoughts

The British Council report is a landmark publication: empirically rich, refreshingly multidisciplinary, and appropriately cautious. It captures a moment when ELT is renegotiating its identity under technological acceleration.

But to move forward, we need conversations that extend beyond “how to use the tools.” We need pedagogical imagination, ethical clarity, and a systemic understanding of how AI interacts with the social, cultural, and cognitive dimensions of language learning.

AI will change ELT — but the nature of that change is still up for negotiation. And it will be shaped not by the technology itself, but by the decisions teachers, institutions, policymakers, and learners make in the coming years.

Is AI going to replace language teachers?

Unlikely. AI automates routine tasks, but it cannot replicate the interpretive, relational, and ethical dimensions of teaching. The real risk is not replacement but role reduction: i.e., teachers being repositioned as content technicians rather than intellectual actors.

If AI is so powerful, why shouldn’t learners rely on it for writing?

Because writing is not merely a product; it is a cognitive and developmental process. Over-reliance on AI bypasses the reflective labour of shaping ideas, structuring arguments, or negotiating meaning—all central to learning. Process writing becomes impossible to assess if authorship cannot be verified, creating harmful washback on pedagogy.

Are there benefits to AI in ELT?

Yes, provided it is used thoughtfully. AI can support personalised practice, lower anxiety in speaking, and generate low-stakes materials. But these benefits materialise fully only when teachers remain intellectual agents, not operators of industrialised pedagogy.

What is the biggest risk if the profession does not act?

That ELT will drift into a technocratic equilibrium where pedagogical purpose is subordinated to efficiency metrics, assessment becomes surveillance-led, and the intellectual autonomy of teachers diminishes. The future is not pre-determined; it is a question of governance.

What can language learning institutions do right now?

They can: (a) create space for teacher-led experimentation, (b) protect process writing even when verification is difficult, (c) invest in critical AI literacy, (d) resist vendor-driven pedagogy, and (e) commit to transparency in technology procurement. Institutions must treat AI as a pedagogical concern, not merely a procurement decision.
