While generative AI produces written content in response to human prompts, AI agents can complete tasks independently, with little or no human supervision.
Weeks after a third-party agentic artificial intelligence tool called Einstein caused an uproar over its ability to complete entire courses in the learning management system Canvas, the platform has unveiled an agentic AI tool of its own. But instead of helping students cheat—or automating instruction—its creators say it’s designed to enhance teaching and learning.
Earlier this month, Instructure, the company that owns Canvas—which is used by more than 40 percent of higher education institutions across North America—announced the launch of its IgniteAI Agent. The new technology, which can automate “low-value” tasks for faculty such as rubric generation, content alignment and discussion reviews, “frees educators to focus more on mentoring, feedback and meaningful learning experiences,” Instructure said in a news release. Powered by Amazon Web Services, the IgniteAI Agent will be free for U.S. Canvas customers through June 30; after that it will be available for purchase as part of Canvas’s premium offerings.
Its rollout comes amid growing buzz about the power of agentic AI to automate workflows across industries—and fears that it could move higher education closer to realizing the “dead classroom” theory, a scenario in which computers teach and grade other computers.
As faculty use of generative AI continues to increase, education technology experts predict agentic AI will shape higher education’s ever-evolving relationship with AI this year. Since 2025, ServiceNow, Google, Writer, Amazon Web Services and Microsoft have all released prebuilt agents that customers can deploy within their organizations.
Instructure—which first announced plans to integrate AI features, including agents, into Canvas last summer—also saw potential for how instructors could use agentic AI in the classroom, according to Zach Pendleton, the company’s chief architect.
“As we think about how AI can be applied to problems in education, there’s an opportunity to do what we’re doing now in a little better way: point-and-click AI or add a button that shrinks a five-step process down to a one-step process. Those are useful because they provide a safe place for teachers to use AI and begin to understand … the promise of AI,” Pendleton told Inside Higher Ed.
“But the technological ball is not staying there,” he added. “It’s [moving toward] looking at reimagining the way we do things today with the hope of getting even better results regarding student outcomes and time-saving.”
Specter of the ‘Dead Classroom’
While generative AI produces written content in response to human prompts, AI agents can complete tasks independently, with little or no human supervision. And if generative AI stoked faculty fears about widespread cheating, the rise of agentic AI has only accelerated those concerns.
Last month, a young tech entrepreneur launched Einstein—marketing it as an AI agent that could complete courses in Canvas—in a reported effort to spark conversation about how the advancement of agentic AI has made cheating easier for students, more widespread and harder to detect. Within days, Instructure and CMG Worldwide, which manages the licensing rights for the Einstein name, issued cease-and-desist letters, and the product went dark.
While Einstein is no more, students may still be able to use other agentic AI tools to automate their coursework within Canvas or another LMS. And Canvas’s internal AI agent isn’t equipped to stop them. “Asking AI if something came from AI is a recipe for disaster and disappointment,” Pendleton said. “Large language models aren’t especially great at being able to reason about the performance or presence of other LLMs.”
Although Canvas’s AI agent can create customized assignments and generate personalized feedback, he characterized the idea of AI agents grading the work of other AI agents as “dystopian” and something Instructure wants to avoid.
In an effort to keep humans in the loop, the company purposely built guardrails into Canvas designed to prevent instructors from fully automating grading.
“If faculty use a feature like AI grading to remove themselves from the responsibility of providing feedback and having conversations with students, they’re teaching students that they should just go directly to the AI instead. That short-circuits human connection,” Pendleton said. Instead, if faculty are transparent about using a grading assistant to provide faster, more robust feedback than they’d otherwise be able to, they’ve “provided additional teaching and learning, encouraged students to reach out when [faculty] are available, and saved some time to answer more deeply when they do provide feedback.”
But some education experts worry that integrating agentic AI into the classroom as a time-saving measure will give institutions leverage to increase class sizes and faculty workloads.
“Eventually the question may become ‘If we have so many faculty just using agentic AI, what is their value and purpose?’” said Jason Gulya, a professor of English and media communications at Berkeley College whose research focuses on the role of AI in higher education.
Although he’s just getting acquainted with Canvas’s new AI agent, Gulya sees both potential benefits and downsides for faculty who use it.
“Part of me thinks an AI could be extremely helpful for course design,” he said. “For example, the challenge with something like AI-generated rubrics is that you have to do so much work to give it context about the course, whereas an AI agent can navigate the course.”
While that may make a professor’s life easier, it may weaken their connection with students.
“If a student knows that a message or rubric was created by AI, we need to think about what it does to the relationship between the student and the professor,” he said. “We’re going to ask students to do something difficult, and if we use this technology in a way that distances the educator and student, they’re not going to do that.”
And if students and instructors begin offloading too much of their work to AI agents, Gulya said, the result could eventually be a classroom mostly devoid of human interaction and engagement.
“That’s absolutely possible if we’re not careful,” he said. “Ed tech is often pushing us toward that dead classroom theory. There’s a chance to rethink it, but it’s going to be on higher ed to do the heavy lifting.”
