Across the country, teachers are anxious. "AI is changing everything," they say. "Students are turning in work that is done by a machine. We have to stop them."
It is. But are they? And do we? This is where the path gets a little quicksand-y.
I’d been thinking about this since a recent event in Oregon, where I got to meet Dr. John Spencer and hear him talk to a group of principals about AI in classrooms.
Dr. Spencer pointed out, and a quick Google search confirms, that universities and schools taking a hard line on catching "cheaters" are now facing backlash from students alleging wrongful accusation and punishment. Institutions like Adelphi University, Yale University, and the University of Minnesota are getting sued, and losing. A high school in Massachusetts was sued for punishing a student who used AI for research when the school had no clear policy on the matter. The takeaway? We can’t enforce something we haven’t clarified as wrong.
And let’s be clear: Trying to stop AI use with AI detection systems is not the solution. I repeat: Not. A. Solution. AI detection is notoriously unreliable and prone to false positives. I decided to test one myself. I took a paragraph from my doctoral dissertation—a paragraph I had obsessed over, revised with four different editors, and felt was near-perfect—and dropped it into GPTZero. The result? It was flagged as AI-generated.
I can assure you, the only non-human intelligence involved in writing that paragraph was my sweet little mutt, who loyally slept beside me through every word of that dissertation.
Schools vary wildly in how they’re addressing AI. A friend is working on his doctorate at a university where AI usage is celebrated and encouraged. His professors told him, "In the past, we'd ask you to generate original ideas, tell you to get them on paper, and then send you to the Writing Center to polish them. Now, instead of the Writing Center, you will use AI. As long as the ideas are yours, AI is no different than a writing tutor."
That's a far cry from another acquaintance, a high school teacher, who says all her students’ papers are written in class so she can watch closely to make sure there is no AI used. Ever.
But AI is here, and we can’t hide from it, scare kids away from it, or eliminate it. Which is a good thing, because the goal of education shouldn’t be to create a sterile environment free of modern tools; it’s to teach students how to use them ethically, effectively, and critically.
So, what to do? Instead of going at this with a sledgehammer of suspicion, let’s consider strategy.
Prioritize Process, Not Policing. Due process is still a requirement. If a teacher suspects a student has cheated, that student deserves a chance to explain their process, thinking, and results without even a hint of accusation.
Care for the Teachers. AI usage and detection are putting pressure on already overworked teachers. They have a tough job as it is. To ask them to write policy? Or to enforce a weak policy? Yeah. Not a great idea. Leaders should get ahead of this one as a way to protect their teachers. Soon.
Start from a Place of Trust. Students often rely on tools to get an edge when they desperately want to do well but aren't confident they can. Their desire to succeed is something we should encourage. Let’s approach AI conversations with an assumption of good intent.
Go for clarity. Most AI disputes arise from the absence of clear, specific guidelines. When policies are ambiguous, students and faculty are forced to invent their own rules. Let’s be clear on what’s okay and what’s not.
Go for consistency. Okay, so we provide clarity. Then we have to be consistent, meaning we can’t have different approaches in different classrooms. A principal friend in Michigan shared a story of a student accused of cheating on a history paper for using the exact research tools he’d been taught to use in his English class. One word: Yikes. Imagine that conversation with student and parent.
Acknowledge and Address Bias. Allegations of AI misuse can unfairly target non-native English speakers, students from less-privileged backgrounds (who may not have been taught the "rules" of academic writing as explicitly), or students with learning disabilities (who might find AI to be a powerful assistive tool). For these students, it’s not about academic integrity so much as equity and accessibility. Besides, their writing styles may differ from what AI detectors (and human readers) have been trained to consider "human-written."
Rethink the Final Product. Writing isn't the only measure of learning. We still have big tools at our disposal— the trifecta of reading, speaking, and listening. For my own university class, I am significantly reducing the amount of traditional written work. I want students to do more thinking and exploring verbally. Instead of asking, "Research and write about a relevant Board Policy," I'm reframing the assignment: "Identify a specific human problem in your school and propose a solution that incorporates 3-5 relevant board policies." Their final product will be more than just a paper. We’ll have presentations, projects, portfolios, speeches, debates. I’m bringing back conferring. We’re going to talk. We’re going to learn. Rather than starting the game, AI will be on the bench, waiting to be called in.
I was discussing AI with my teenage niece the other day. She groaned, "I wish all AI would just go away." She sounded so… hopeless. I asked her to tell me more. She did. "It’s frustrating. Some kids use it. Some don’t. Some teachers let us. Some don’t. Some students use it to write an entire paper, citations and all. Other students spend hours writing without using AI, and they get lower grades because their papers aren’t perfect. Some teachers have even abandoned grading papers because they say they can’t tell anymore if it’s ‘real.’ It’s confusing and I hate it.”
It's confusing and I hate it. That sentiment is echoed by so many of the educators I speak with. My goal is to help us flip that switch.
"It's confusing and it's interesting. So what are we going to do about it?"
Let’s start there.
Let’s stay curious—
Jen
P.S. Tell a friend about this newsletter! All they have to do is subscribe here and they’ll get it delivered to their inbox every Tuesday.
P.P.S. No AI in this newsletter, either, except spell check.
