AI pragmatists: How language teachers are navigating AI with nuance

Source: The Conversation – Canada – By Martine Rhéaume, Coordinator of Technological Innovation and Artificial Intelligence in Language Education, L’Université d’Ottawa/University of Ottawa

A pervasive narrative has taken hold in education: generative AI (genAI) is an unstoppable force, and educators must adapt or be left behind.

Technology companies market AI tools as the ultimate classroom assistants, while popular media warns that essay writing is dead.

Technology enthusiasts and policymakers have long framed teachers as “resistant” and “risk averse.” Discourse about technology in classrooms has amplified the notion that teachers either embrace or reject tech.

Yet research with educators is showing that a binary framing of AI innovators versus Luddites obscures what is actually happening in classrooms.

To better understand this, I turned to my own institution, the University of Ottawa’s Official Languages and Bilingualism Institute (OLBI). Through a bottom-up institutional survey, I consulted English as a second language (ESL) and French as a second language (FSL) instructors to examine their attitudes toward, and current use of, AI-assisted tools.

Twenty-four of 60 eligible staff members responded, yielding a 40 per cent response rate. For institutional research of this kind, that is a solid turnout, offering a broad cross-section of our department.

Because my goal was to understand the nuances of educators’ decision-making, this qualitative sample offers deep insights into front-line teaching realities. The findings point to a thoughtful majority of instructors navigating complex pedagogical terrain with considerable nuance.

The myth of the resistant teacher

As educational historian Larry Cuban has argued, teachers are not inherently resistant to technology; they’re resistant to tools that don’t solve their problems. Data from my study supports this distinction.

Research in second language acquisition suggests experienced language educators, keen to see their students progress, seek normalization of novel technologies — the stage at which a tool becomes invisible and learning takes centre stage.

My survey confirms this orientation. When asked to identify their stance on AI integration, the majority of OLBI staff did not select “skeptic.”

The majority of respondents are best characterized as “pragmatists” — educators who recognize the potential of genAI tools but are withholding full adoption pending credible pedagogical evidence.

A significant minority, however, expressed substantive and philosophically grounded concerns. One FSL instructor described genAI as “une menace à l’autonomie de la pensée” (“a threat to the autonomy of thought”).

This is a considered defence of the critical thinking capacities that higher education exists to cultivate.


The ‘hidden AI’ problem

My survey also suggests a striking inconsistency in how educators conceive of AI. Several respondents reported that they “never” use generative AI. Yet, in subsequent questions, they acknowledged regular use of tools such as Grammarly for writing assistance or DeepL for translation.

Grammarly has added generative AI on top of its earlier technology, which integrated machine learning and natural language processing; the genAI feature can be turned off. DeepL has also developed a genAI model.

However, the bigger point is that instructors appeared to distinguish between AI they perceived as assisting existing work and AI they perceived as generating new text. That distinction reflects different understandings of authorship, agency and acceptable use.

What the data reveals, then, is an intuitive taxonomy: instructors are broadly comfortable with tools that refine or correct their existing work (assistive AI) and considerably more cautious about tools that produce content on their behalf.

Such a distinction is reflected in my own process with this article. As a francophone writing in English, I used Anthropic’s Claude to clarify sentence-level phrasing in a draft I had already written.

This distinction between refining existing work and producing new content echoes broader discussions, taken up elsewhere, about learning and academic integrity.


The efficiency shield

The most significant finding from the survey is that instructors are deploying genAI primarily as an administrative efficiency tool: generating lesson plans, drafting course communications and creating short texts for classroom use. Such tasks consume significant time but don’t directly mediate student learning.

One ESL instructor shared their enthusiasm about this:

“The possibilities for lesson planning and activity ideas are endless.”

Yet the same instructors who embraced AI for their own productivity expressed marked reluctance to introduce these tools into student learning.

The reasoning is grounded in cognitive science. Language acquisition depends on what psychologists Robert Bjork and Elizabeth Bjork term desirable difficulties — the effortful cognitive processing that consolidates new linguistic knowledge into long-term memory.

When a student offloads a grammatical decision to an auto-complete function, or delegates argument construction to a language model, they bypass the neural engagement that makes learning durable. This phenomenon, known as cognitive offloading, may produce a polished written product while leaving the underlying competency undeveloped.

One respondent articulated this concern:

“If [students] get away with that, then they will never learn how to write.”

Such positions align with UNESCO’s 2023 guidance on generative AI in education and research, which cautions that the pace of genAI adoption in educational settings must not outstrip our collective understanding of its cognitive and ethical implications.

Our instructors are, in effect, applying an instinctive precautionary principle — one that is well-supported by the empirical research.


Policy must follow pedagogy

The OLBI consultation illustrates why meaningful AI education policy cannot be imposed from above. If universities issue broad mandates to embrace innovation without consulting those who understand the cognitive architecture of learning, they risk producing policies that are administratively tidy but practically incoherent.

Conversely, blanket prohibitions ignore the reality that students will graduate into a labour market saturated with AI tools, and must develop the critical literacy to engage with them responsibly.

The path illuminated by our “pragmatist” majority is one of critical AI literacy. Concretely, this involves three institutional commitments:

Distinguishing between functions of AI: Institutions must teach students to distinguish between AI tools according to their function rather than their underlying technology. This means separating tools that operate in an assistive capacity, correcting, refining or translating work the student has already produced, from tools that operate in a generative capacity, producing content on the user’s behalf.

That said, both categories of “assistive” and “generative” AI warrant scrutiny. Notably, some educational and accessibility rights bodies are discussing the use of generative AI as an assistive technology, particularly for people with disabilities.

Protecting the learning process: Assessment design should value the process of writing and argumentation — drafting, revision, reflection — rather than privileging only the final product, which a language model can readily simulate.

Repositioning the instructor: As the OECD has noted, the educator’s role is shifting from knowledge transmitter to critical evaluator and learning architect. AI tools can support this transition — but only if instructors retain the agency to define the terms of engagement.

The question facing universities is whether institutions will trust the educators who understand their students’ cognitive needs to draw the lines that matter.

The Conversation

Martine Rhéaume does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. AI pragmatists: How language teachers are navigating AI with nuance – https://theconversation.com/ai-pragmatists-how-language-teachers-are-navigating-ai-with-nuance-279041