Achilleas Kostoulas

AI in Language Education: Notes from an International Panel

Rapid adoption of AI is not the same as thoughtful use. Reflecting on an international panel discussion, this post explores what AI asks of language educators in terms of learning, judgement, ethics, and accountability.

Hand writing A.I. with chalk on a classroom blackboard beside books, representing thoughtful AI use in language education.


On 23 April 2026, I joined a panel organised by the I-LanD Interuniversity Research Centre, entitled Artificial Intelligence in English Language and Translation: Bridging Research and Practice. The event, convened by Francesca Raffi of the University of Macerata1 in Italy, focused on the growing role of artificial intelligence in language professions and raised some implicit questions about the increasing demand for thoughtful judgment in AI-assisted practice.

Francesca framed the meeting by noting that “AI is not just entering language education and translation, but it’s also starting to reshape both”. This is, perhaps, a much more important point than it might initially seem. When we think about tools, our focus is on the technology of AI-assisted language education. When this shifts to how language education is changing, we are no longer talking about its design; we are talking about what it does and, ultimately, what it is for.2

What I would like to do in this post is revisit the panel contributions and tease out what they suggest about a deeper question: not how AI is being used, but what its growing presence asks of teachers as professionals who have traditionally grounded our work in thoughtful judgement, relationships, and accountability. This is, to be transparent, my own reading and selective retelling of a shared conversation, and I do not doubt that others present would narrate it differently. It is, then, less a report than an interpretation, but I would nevertheless invite you to read on.

Pedagogical judgement:
What counts as learning?

One of the sharpest questions of the day came from Ezgi Aydemir Altaş (Yıldız Technical University, Turkey), who works on AI literacy and educator agency. She asked: if students submit fluent, coherent, grammatically accurate texts that they cannot explain the next day, has learning taken place?

If students submit fluent, coherent, grammatically accurate texts that they cannot explain the next day, has learning taken place?

You might be tempted to read this as a rhetorical question, but I think it is deeper than that. What Ezgi was doing, I think, was raising a point we tend to overlook about how we assess linguistic development and, ultimately, about what we think language learning actually is. The text is not the end product of learning: it is a proxy that is supposed to show how much learning has taken place. But if AI unsettles assessment, it also unsettles the core of our professional practice: how we recognise learning.

Ezgi also drew a useful distinction between cognitive offloading, which means using technology to extend thinking, and cognitive surrender, i.e., delegating thinking to the technology. This, however, seems easier to describe than to locate in practice. The difference often lies in what the learner brings to the interaction and what they take away from it. That is an uncomfortable place for AI to sit: it cannot guarantee its own pedagogical value.

Contextual judgement:
How do we interpret situated meaning?

The next contribution, by Elif Buber (Center for Innovative Education, USA), addressed what she termed contextual AI. Briefly, contextual AI involves systems that can attend to learner diversity, communicative purpose, and cultural setting, rather than treating these as noise around an otherwise clean processing task. The use of artificial intelligence to attend to these dimensions is promising, but not unproblematic.

The difficulty, I think, is that language is constitutively contextual. The same sentence can carry different pragmatic weight in different settings, and a translation can be technically accurate while being communicatively disastrous. What makes this especially challenging is that the relevant context is often tacit, relational, and historically situated in ways that resist easy formalisation. Teachers develop sensitivity to this over time. Whether AI systems can acquire anything similar seems to me an open question. Buber’s contribution raised it without pretending to settle it.

Ethical judgement:
Who benefits, who is excluded, who is accountable?

Alessia Chiriatti (Istituto Affari Internazionali, Italy) brought a perspective I find particularly important: the educational technology discussion frequently brackets questions of power. Even before the advent of AI, some people have looked to digital technology to provide scalable solutions to structural inequality in education, i.e., inexpensive ways to deliver instructional material to people in whose education we are less willing to invest. However, access to lessons or instructional material does not equal quality education. More importantly, the gap between availability and quality is often least visible precisely where it matters most.

Even before AI, digital technology was seen as a scalable solution to structural inequality in education.

Drawing on contexts such as Syria and Turkey, which face the challenge of educating large numbers of displaced learners, she argued for treating AI not only as a pedagogical tool but as a governance instrument. A governance instrument is (or at least I understand it to be) a technology that distributes access to education, shapes what counts as learning, and determines who receives what kind of educational experience.

The distinction is not merely theoretical. Technologies introduced under the banners of access and efficiency can simultaneously reproduce structural inequalities, displace institutional responsibility onto individuals, and operate with limited accountability in fragile or crisis-affected settings.

Professional judgement:
When speed increases, what happens to quality?

Marián Kabát (Comenius University Bratislava, Slovakia) made an observation that I have been thinking about since: translation has historically been a slow process, whereas AI presses toward making it fast. That tension between speed and expertise is not limited to translation, but it is perhaps sharpest there, because translation has a long tradition of understanding slowness as a professional virtue: weighing nuance, attending to ambiguity, and taking responsibility for choices.

Kabát also raised a methodological puzzle that applies well beyond translation: if a system generates multiple different outputs from the same source, which one counts? How do we evaluate quality? These are not technical problems with technical solutions. They reach into what professional trust is based on and who is accountable when it breaks down.

Institutional judgement:
What principles should guide adoption?

When my turn came to speak, I tried to focus on a question that the earlier contributions had sharpened rather than settled: what does it mean to use AI well in language education, as opposed to merely using it efficiently? The preceding speakers had, between them, unsettled our confidence in what counts as learning, whether AI can read context, who bears the costs of adoption, and what professional accountability looks like when outputs accelerate. Any principled answer has to start from those complications, not from the assumption that good intentions are sufficient.

The answer I have been working toward, in the AI Lang project I run with colleagues at the European Centre for Modern Languages, revolves around four principles: that AI use should be safe, responsible, purposeful, and reflective. I have written about these before, and I will not rehearse the full framework here.3

What I want to note is that the seminar itself put pressure on those principles in useful ways. Safe is easier to assert than to enact when, as Alessia’s contribution reminded us, the most vulnerable learners are also those with the least access to redress. Responsible looks different when the environmental and linguistic costs of AI systems are unevenly distributed across educational contexts. Purposeful depends on having a clear account of what we want learning to accomplish, and Ezgi’s question makes that harder to take for granted. And reflectiveness, I increasingly think, is not just a competence to be developed but a structural condition to be created: teachers need time, institutional support, and collegial space to think carefully, not just the personal disposition to do so. None of these pressures undermines the framework. They are, in a sense, what it is for.

Thoughtful Use, Not Shallow Adoption

The conversation returned, in different ways, to a distinction I find useful: not between those who embrace AI and those who resist it, but between shallow adoption and thoughtful use. If AI is reshaping language education and translation, then the central task is not resisting change or accelerating it. It is preserving the forms of judgement on which both professions depend.

That, I think, is precisely the kind of work that this moment requires.


Footnotes

  1. I was very surprised, by the way, to read that the University of Macerata is one of the oldest universities in Europe, founded in 1290.
  2. Some readers might recognise that the framing here comes from Daniel Dennett’s book, The Intentional Stance (1987). Dennett distinguished between three ways of looking at systems: from a physical perspective, noting their mechanics; from a design perspective, looking at what they do; and from an intentional perspective, examining what they are for.
  3. We may, in fact, revise this framework further, replacing the principle of ‘reflectiveness’ with ‘human-centredness’, with two guidelines focusing on students and teachers respectively.
