If you work in faculty development, you have probably heard the same concern on a loop for the past year: “All my students are cheating using AI.” At Georgia State University, our campus teaching and learning center gets more requests for workshops on how to prevent digital dishonesty than on any other topic. Throughout the fall 2025 semester, I averaged one workshop, presentation or meeting about AI and academic integrity every four workdays.
University faculty are anxious, and it shows in their reactions. We have all read the stories about professors reverting to blue books or opting for early retirement to avoid the perceived flood of machine-generated text. As we have struggled with how to promote academic honesty when AI makes dishonesty so easy, higher education has largely retreated into two defensive postures: surveillance or supplication.
The surveillance strategy relies on detection, an arms race we have already lost. AI-detection tools are biased, easily circumvented and prone to false positives. To test this, I fed the first chapter of my dissertation (written in 2006) into a popular AI detector. It flagged my work as 39 percent AI-generated. We cannot police our way out of this when our radar is broken.
The alternative is what I call a strategy of supplication, essentially trying to convince students to be responsible AI users. I see universities creating syllabus statements and online modules on AI literacy, hoping that if we explain the ethics clearly enough, students will comply. But this misses the point entirely. Students generally don’t cheat because they lack moral fiber; they cheat because they are navigating a system of incentives that prioritizes efficiency over learning.
I believe too often we build courses that punish the very thing learning requires: making mistakes. When we grade on high-stakes curves, offer little feedback and demand perfection on the first try, we are signaling that the product matters more than the process. By removing the space for safe experimentation and feedback, we have made the struggle to learn a liability. In that context, students are turning to AI not to avoid learning, but to avoid the risk of failure in a system that offers them no safety net.
Last fall, I had lunch with a colleague who told me she was abandoning online teaching entirely. She’d come to love teaching online during the pandemic but felt that the pervasive use of AI had made it impossible for her to connect with students and create authentic experiences. She was especially exasperated that students were using AI to write discussion post assignments that asked for personal examples. “I ask them to share an example from their own lives, and they still give me something AI wrote,” she said, clearly frustrated.
I asked about the assignment structure. It was the standard “post a reply to this question and then comment on the posts of two peers” format. That isn’t a discussion; it is digitally talking into an empty room. In this situation, I don’t think students are cheating because they are unethical or because they don’t care about their learning. They are cheating because they are bored. They are opting out of an experience that lacks meaningful feedback, genuine collaboration or clear learning objectives.
I have concluded that the question of how to curb AI-enabled dishonesty in our classes has less to do with AI or honesty and more to do with our classes. The ease with which students can cheat using AI has exposed an uncomfortable truth: we need to do a better job teaching. We don’t need to AI-proof every single assignment or abandon teaching large online classes entirely. We do need to change the way we design and teach our classes so that the difficult work of learning, not cheating, is the more attractive option. Here are three ways we can do this:
- Make discussions actual discussions. Let’s retire the “post once, reply twice” formula. It has become the busywork of the digital age. Instead, use online forums for true interaction: peer review, debating applied examples or solving problems collaboratively. If an online activity doesn’t require genuine human back-and-forth, it probably doesn’t need to happen in a discussion forum.
- Use pedagogies that motivate honesty. In a recent op-ed in The New York Times, psychologist Angela Duckworth argued that willpower is a false narrative. People who successfully eat healthfully or reduce social media use don’t do it through sheer determination; they do it by structuring their environment so that the right choice is the easy choice. We can adopt the same approach in our teaching. By scaffolding projects, integrating process-based feedback and using mastery-based grading when possible, we make doing the work more rewarding, and easier, than trying to engineer a prompt to fake it.
- Teach small, even when the class is big. Human connection combats cheating, and positive social pressure is a strong motivator to do the right thing. This is easy in a small seminar, but what about a large lecture class? The key is to find ways to make students feel seen and heard. At Duke University, Professor Mohamed Noor flipped his large lectures, breaking the class into small groups to work on problems while he circulated. On my own campus, five of my colleagues who co-teach a large-enrollment course created vertically integrated project teams. These small teams offer a way for students to apply course knowledge to solve problems they care about while developing meaningful relationships with their peers and instructors. When students feel their contributions matter, they are less likely to ask a chatbot to do their thinking for them.
When we debate whether to adopt AI tools or how to punish AI-related misconduct, I think we are dancing around the real issue. We should take this opportunity to look critically at how we teach. Changing how we’ve become accustomed to presenting content or assessing learning can seem daunting, so don’t try to do it alone. Ask a trusted colleague to observe your teaching and offer you honest feedback on where your activities or assignments don’t support your learning goals. If your campus has a teaching and learning center, schedule time to meet with a consultant. Speaking as a center director, I can promise you that if you bring us an assignment where you’re seeing a lot of AI misuse, we will have suggestions for how to improve it.
Many faculty see AI as a threat, both to student learning and to academic integrity. They worry that the classroom is becoming a battleground over AI ethics rather than a space for discovery. But the answer isn’t better surveillance. Instead, we need to focus on creating learning experiences that inspire students to want to do their own work. The best defense against an AI chatbot isn’t a detector or a syllabus statement; it is a class worth taking.
Kim Manturuk is the executive director of the Center for Excellence in Teaching, Learning and Online Education at Georgia State University.
