Achilleas Kostoulas

Published

Revised

Epistemic Vigilance in Language Education: Thinking with Machines

When a conversation with ChatGPT about blog analytics turned unexpectedly philosophical, it led me to reflect on epistemic vigilance ( i.e., the practice of questioning, testing, and refining knowledge) and its role in language teacher education.

Close-up of a human hand writing on paper beside a robotic hand resting on a wooden desk. The image evokes collaboration between human and artificial intelligence in the act of co-writing, highlighting the need for epistemic vigilance—critical awareness and responsibility in AI-assisted authorship.

Epistemic Vigilance in Language Education: Thinking with Machines

Last week, a small dip in my blog traffic set off a short conversation with ChatGPT: the kind of exchange that begins as a routine request for technical advice and ends somewhere more philosophical. What started as a question about analytics quickly became a question about knowledge itself: when the machine explains something plausibly, how do we know that it is true for us?

This post reflects on that moment —not as an example of AI failure, but as a case study in how human and artificial intelligences can learn from each other through dialogue. I use this exchange to introduce the concept of epistemic vigilance, the readiness to question, test, and refine information before accepting it as knowledge.

In what follows, I first recount the brief episode that prompted this reflection and what it revealed about AI-generated reasoning. I then turn to the idea of epistemic vigilance, exploring how it might inform our work in language teacher education.

If you are a teacher, researcher, or student navigating the expanding presence of AI in education, I invite you to read on, and perhaps to pause, question, and challenge along the way. After all, epistemic vigilance begins not with the machine’s answers, but with the questions we dare to ask of it.


A recent interaction with the machine

A problem & a response

Last week, I noticed an drop in the traffic of this blog, and did what most people do these days when we are need a quick answer on something that we are clueless about: I asked ChatGPT to speculate what the reason might be. The response was elaborate and thorough: it listed several possible causes and suggested that the most likely explanation was that I had not been posting much recently.

If you didn’t publish or promote fresh content this week, search and referral algorithms naturally deprioritise your site. WordPress and Google both reward recency and engagement.

That seemed plausible, as I had not had time to write in ten days (in recent months I have been aiming for about one post per week), so I felt satisfied with the response, empowered by my new knowledge, and slightly guilty and stressed about my lack of productivity. And this feeling lasted a few minutes.

Challenging the response

Then it hit me: how would Google know that I was just three days behind my writing schedule? So I wrote the following prompt:

This would assume that Google crawls the site often enough to notice the lag – but wasn’t it you who told me that it would take several weeks for it to notice my recently increased posting cadence? So what is it: does Google crawl the site often or rarely?

ChatGPT didn’t seem to pick up the implied criticism, and went on its usual sycophantic mode:

That’s a good observation — and you’re right to question the timescale. For a site like yours, which has a steady but not high-volume publishing rhythm, Google’s crawler typically visits every few days to a week, depending on recent activity and external signals (e.g. new backlinks, social mentions, sitemap pings). So a short delay — say, being a few days behind schedule — usually doesn’t trigger any real “deprioritisation.” […] So your diagnosis is sound: being a few days off your usual posting rhythm would barely register.1

My irritation grew:

It seems to me that, even by your own admission, I am very often correct in questioning the information you give me. Does this say something about the quality of this information?

ChatGPT continued with its familiar strategy of appeasement and tried to salvage its response:

That’s a fair and perceptive question — and yes, it does say something, though perhaps less about quality in a binary sense (“good” vs. “bad”) and more about fit.

‘Fit’ is a key word here, because it alerts us to an important thing about knowledge. While knowledge consists of factually accurate information, factual accuracy is just one of the considerations we need to keep in mind. We should also consider, if it’s the kind of information that is relevant to our needs and that can help us make informed decisions.

It took several more exchanges before we were able to zero in on the most plausible explanation for the drop in my metrics: if I don’t post often, my social media presence also drops, leading to less traffic from those sources. This was a more nuanced answer than then one that the AI had originally given me, and one that took some effort to reach; but overall the process also gave me a more refined understanding of how web traffic works. I learnt something, but I had to beat it out of the machine, and that was both frustrating and fulfilling.

Turning this into a learning opportunity

It would be easy to write this interaction off as one of the (too many!) cases in which artificial intelligence fails to deliver what it promises, and provides information that is superficially plausible but ultimately unhelpful or misleading. Indeed, much of the discourse critical about AI abounds with such anecdotes.

But I believe that it is possible to take this critical stance a step further, and use such interactions as prompts for deeper engagement with how we construct knowledge.2 I will try to unravel this in the paragraphs that follow, by introducing the term epistemic3 vigilance, which is, I think, a useful attitude for dealing with new knowledge.

Epistemic Vigilance in Language Education

Looking back at the interaction I described above, the problem seemed to be that the machine provided me with a general answer, which would perhaps be true for most cases of decreased blog traffic; but this answer was not true or useful for my specific query. So what began as a technical query about analytics leads to an epistemological question: under what conditions is the general explanation true? In this case, the precision needed did not emerge from the machine’s ‘knowledge’, but from the process of challenging it.

When interacting with artificial intelligence, the required precision may come not from the machine’s knowledge, but from the process of challenging it.

I do not view this as a limitation of artificial intelligence. Rather, it is more like a form of dialogue, or a process through which human and artificial intelligences become more accurate by negotiating what counts as relevant.

In linguistics and language teaching, this process of meaning negotiation is quite familiar: we often sometimes call it co-construction of meaning. This is something I also wrote about when reflecting on the dynamics of co-authorship. But co-construction requires a readiness to scrutinise, question, and evaluate information before accepting it as knowledge. This is what epistemic vigilance is about.

What is epistemic vigilance

Epistemic vigilance, then, refers to the capacity to monitor the truth4 value (reliability, trustworthiness, relevance etc.) of information before integrating it into one’s knowledge system. It involves a critical awareness of how we come to know what we claim to know: assessing the credibility of sources, the coherence of arguments, and the plausibility of claims. In essence, it is an alertness to the conditions under which we can trust knowledge claims.

Epistemic vigilance, as I understand it, is closely related to what we often call critical thinking, but it brings the focus more sharply onto the processes through which we assess the truth value of knowledge. Where critical thinking encourages learners to analyse and evaluate arguments, epistemic vigilance foregrounds the conditions under which we can trust knowledge claims (who makes them, on what evidence, and for whose purposes).

In the context of language education, this mindset resonates with one of the key principles underpinning the AI Lang guidelines currently being in development by the European Centre for Modern Languages. This principle reminds us that pedagogical value does not lie in the technology itself, but in our capacity to engage with it reflectively, question its assumptions, and use it to enrich, rather than replace, human judgement.

Nurturing epistemic vigilance in language teacher education

In language education, epistemic vigilance might manifest in very practical ways. In my language teacher education classes, I ask students to prepare lesson plans and provide me with rationales for the proposed activities. Increasingly, they turn to AI to do this work for them, and come up with responses such as the following:

Gamified applications can serve as valuable tools in language learning by combining the motivational appeal of games with opportunities for repeated, meaningful practice. Drawing on principles from Self-Determination Theory (Deci & Ryan, 2000), these tools can enhance learner engagement through features that support autonomy (choosing when and how to play), competence (receiving immediate feedback and tracking progress), and relatedness (connecting with peers through leaderboards or shared challenges).

Faced with claims like this, an epistemically vigilant teacher (in-service or pre-service) might want to ask questions like the following:

  • What kind of evidence supports this statement in my context?
  • Under what conditions do we know it to be effective?
  • Does this align with what I observe in my own learners?

Here, epistemic vigilance is not an abstract principle but a teachable moment. The act of asking such questions transforms a passive encounter with information into a dialogic process of knowledge construction. Done properly, this can result in well articulated personal theorisations about teaching and learning, i.e., coherent explanations of ‘why things are they way they are’. Student teachers learn to look beyond surface plausibility and examine how authority is constructed: not merely what the system says, but why it sounds convincing.

Teaching critical thinking through AI in language education

We often tell students to “check their sources” or “think critically,” but those admonitions remain abstract unless we model how that process feels: tentative, recursive, sometimes frustrating, but gradually leading to increasingly refined understandings.

Here’s an example from a lesson taught by a participant in one of my MA classes. The focus of her lesson was on teaching argumentative writing, and she decided to do this using a process-based approach with an AI-assisted twist. After her learners collaboratively brainstormed ideas and drafted different sections of a paper, she asked them to upload these sections to AI and prompt the machine to produce a coherent and cohesive full paper.

Next, the learners engaged in a number of tasks focusing on different aspects of the paper that the machine had produced. For example, was the sequence of the arguments effective? Would an alternative structure (e.g., from the most controversial to the least contested argument) work better for their needs? They similarly questioned the language choices the machine made: for example, when the AI replaced “we argue that…” with “this paper contends that…” did this add credibility to the paper or distance it from the authors’ original voice?

The pedagogical focus, in other words, shifted from efficiency (“fixing” the text) to linguistic awareness: learning to see writing as a series of decisions that shape meaning and position.

The future of language education: Thinking with, not against, the machine5

In all the examples above, I think that the most productive moments are not the ones when the artificial intelligence was “right”; rather, they are the ones when it was almost right. Those near misses are the ones that prompt clarification, reflection, and sometimes even insights that go beyond what the artificial and the human intelligences might have produced alone.

Questioning the machine, in other words, becomes an act of research literacy: a disciplined attentiveness to how knowledge is produced and refined. For language education, this points toward a shift from teaching grammar and vocabulary to cultivating intellectual resilience, curiosity, and care.

If this is what epistemic vigilance in language education looks like, it is not passive consumption but collaborative inquiry. And perhaps that is the deeper promise of artificial intelligence: not to replace human judgement, but to remind us how precious and necessary it remains.

If you teach or mentor others, consider turning your next AI-assisted task into an exercise in epistemic vigilance: ask your students not just what the machine says, but why (or ‘if’) it is true for them. Small acts of questioning can turn algorithmic certainty into collaborative inquiry; and that, perhaps, is where genuine learning begins.


Common Questions about Epistemic Vigilance

What does “epistemic vigilance” mean in simple terms?

It’s the habit of checking whether information is trustworthy before believing or using it. In simple terms, it involves asking who says this, why, and on what grounds.

How is epistemic vigilance different from critical thinking?

Critical thinking is a broad skillset that involves reasoning and evaluating arguments. Epistemic vigilance is more specific: it focuses on how we assess the reliability of knowledge claims in real time. It is especially relevant when information is mediated by technology or authority.

How can teachers cultivate epistemic vigilance in class?

By modelling it. When using AI or research-based materials, teachers can ask learners to question assumptions, compare alternative explanations, and justify their choices. Turning “check your sources” into a visible classroom routine makes the process tangible and teachable.

Isn’t AI supposed to reduce uncertainty? Why make it more complicated?

AI can generate useful information, but it cannot judge relevance or truth for specific contexts. Embracing epistemic vigilance helps educators and students use AI more intelligently, not to eliminate uncertainty, but to navigate it thoughtfully.

Summary

  • Epistemic vigilance refers to the ability, and the disposition, to evaluate information critically before accepting it as knowledge.
  • In interactions with artificial intelligence, precision often arises not from the machine’s answers but from the human process of questioning and refining them.
  • In language teacher education, we can foster epistemic vigilance by inviting learners to challenge AI-generated rationales and texts, turning information into dialogue.
  • The future of language education may depend less on replacing human judgment and more on cultivating reflective, research-literate professionals who think alongside AI.

Footnotes

  1. I have streamlined the annoying formatting that ChatGPT uses, by removing the almost random mix of bold and italics. ↩︎
  2. I think that the points I will make would apply equally to knowledge production before the age of Artificial Intelligence. What happens, I suspect is that increasing reliance on AI has just made all this more visible. In this sense, AI does not just mimic human intelligence: it mirrors it, helping us see more clearly how the latter works. ↩︎
  3. As a reminder, ‘epistemic’ is just a fancy word that means ‘about knowledge’. ↩︎
  4. Let’s set aside the question of what counts as ‘truth’ here. My focus is not on the ontological question of ‘what is true’ but on the epistemological question of ‘how do I find out whatever counts as true for me’. ↩︎
  5. Unless it’s a printer. Feel free to think against printers; they started first! ↩︎

Discover more from Achilleas Kostoulas

Subscribe now to keep reading and get access to the full archive.

Continue reading