    What History, Evidence, Competing Views Say About AI, Higher Ed

March 26, 2026 | 14 min read

    Something significant is happening in higher education. Whether it represents a genuine inflection point or another chapter in a longer story of technology and learning is a question worth taking seriously—because how we answer it will shape the choices we make. Before we can think clearly about where we’re going, it helps to understand where we’ve been.

    “A new era of education was about to begin: one where any student who wanted an education could get one quickly and inexpensively. In this era, students would not face geographic or financial barriers to quality instruction. Not only would education be delivered directly to learners but it would also be personalized, using artificial intelligence, so that students could learn more quickly and deeply than in the traditional classroom. Powered by rapidly accelerating capabilities of technology and artificial intelligence, this new era would raise the level of education across the globe to respond to the demands of a rapidly changing economy. It was an exciting time of promise and new advances. It was also 1966.”

    That passage is from Anne Trumbore’s recent book, The Teacher in the Machine: A Human History of Education Technology, and I read it aloud to open the AI & the Future of Learning Summit at the University of Michigan last week. The final line tends to land.

    Six decades ago. Not ChatGPT’s release in November 2022. Not pandemic-era online learning. Not the year of the MOOC.

    Trumbore’s point isn’t that nothing has changed or that today’s AI moment is just another cycle of hype. Her sharper argument is that there is a persistent human tendency, one that venture funding reliably amplifies, to favor what appears novel over what we already know works. Conversations about technology and education too often skip past the accumulated evidence on teaching and learning in order to chase what merely feels new. Ironically, by ignoring the lessons of earlier experiments, we can slow down meaningful innovation rather than accelerate it.

    That’s the kind of historical humility I wanted to bring into the room on March 17 in designing and opening the summit—not as a brake on ambition, but as a foundation for it. The Center for Academic Innovation at Michigan has spent over a decade at the intersection of learning science, educational technology and the practice of institutional change—which is part of why convening this particular conversation felt both natural and necessary.

    The most productive posture is to operate inside the tension.

    At the same time, humility about history shouldn’t slide into passivity about the present. And this is where I find Dario Amodei’s two essays useful as a pairing.

    In one—“Machines of Loving Grace”—Amodei describes a future where AI dramatically accelerates human progress: curing diseases, expanding scientific discovery, improving quality of life at scale. In another—“The Adolescence of Technology”—he reminds us that powerful technologies emerge in turbulent phases, where societies struggle to understand their consequences and institutions struggle to adapt.

    The temptation is to choose between these views. To be either an optimist about AI or a skeptic. But I think the more productive posture is to hold both. Not choosing between optimism and catastrophe. Not pretending they’re compatible. But operating productively inside the tension. We are living in a time where extraordinary optimism and deep anxiety coexist. And both perspectives are worth taking seriously.

    My own view is that universities are not bystanders in this story. They are among the few institutions with the independence, the expertise and the long time horizon to ask the questions that markets and governments tend to defer. That’s not a comfort—it’s a responsibility. And it’s one I don’t think higher education has yet fully accepted.

The event brought together an unusual cross-section of people—university leaders, ed-tech founders, workforce practitioners, technology companies, faculty, staff and Michigan students. Before the first session had even begun, it was clear how rare that kind of room is. Dozens of institutions were represented, ranging from Stanford and the University of Virginia to Michigan State University and Grand Valley State—alongside OpenAI, AWS, Salesforce, Microsoft, Coursera, Podium, Emeritus, Noodle, CodeSignal, Superhuman, edX, GSV Ventures and Axim Collaborative, among many others. These conversations too often happen within sectors rather than across them—or in venues too large for the kind of exchange that actually moves ideas forward. The questions we’re facing don’t belong to any one group, and the answers won’t, either.

    We organized the day around a single set of questions: What is the campus of the future, whom will it serve and what is the role of AI? We grounded our thinking 10 years out—far enough to escape current road maps, close enough to feel the consequences of the choices being made now.

    A few ideas from the day that I think deserve wider attention.

    We need to ask what we’re optimizing for and be honest about the answer.

    Robert and Elizabeth Bjork draw a distinction that sat at the center of my opening provocation and, I’d argue, at the center of nearly every question the summit raised: Performance is what you can do right now, with all your supports in place; learning is what you can do later, when the scaffolding is gone. Those two things are frequently in opposition. Conditions that maximize current performance often actively undermine durable learning.

    In consumer products, frictionlessness is celebrated—fewer steps, faster results. But in learning, something different is at work. The Bjorks call the conditions that produce real learning “desirable difficulties”—spaced practice, retrieval, generating your own answers before being given the correct ones. Every one of these feels harder and works better.

    So when we talk about AI making education more efficient, we need to ask: Efficient toward what? If the answer is performance in the moment, AI can help enormously. If the answer is learning that lasts—understanding that holds up when the tool isn’t available—then we need to be much more careful about what we automate and change in higher education.

    Greg Weiner has argued in The Washington Post that the rise of AI actually strengthens the case for liberal education. He draws a distinction between literal education, focused on outcomes that are immediately measurable and tied to short-term economic returns, and liberal education, focused on cultivating discernment. His framing is memorable: “Measuring only the immediate obscures the danger of obsolescence.” The intellectual habits that allow us to ask good questions and evaluate answers remain remarkably durable even as the tools change rapidly around them.

Carlo Iacono, writing in Hybrid Horizons, draws a distinction that I find useful here: not all friction is the same. We should be working hard to eliminate bureaucratic friction, the kind that excludes students without educating them: rigid scheduling that prevents working adults from attending, and credit-transfer barriers that make students repeat work they’ve already mastered.

    But pedagogical friction—the kind that produces learning—is something we should be protecting, not optimizing away. The struggle to hold a difficult idea. The discomfort of not knowing. The slow accumulation of understanding that can’t be downloaded or shortcut.

    What’s at risk when we optimize away that friction isn’t just learning quality in the moment. It’s the formation of capability over time.

    We talk a lot about upskilling. But there is also growing evidence of deskilling—the gradual atrophy of capabilities no longer exercised.

    And there’s a concept that goes beyond even deskilling. The more concerning risk is what researchers have begun calling never-skilling: not the loss of a capability someone once had, but the failure to develop one at all, because AI handled the effortful work from the very beginning. I’ve written before about the perils of off-loading—the risk that in delegating what feels tedious, we give away what is actually formative. Those tasks were never incidental. They were how novices became experts. Automate them away and you don’t just change the job—you interrupt the development.

    The jagged frontier demands judgment—not just fluency.

    Ethan Mollick’s concept of the jagged frontier helped set shared language for the day: AI performs remarkably well in some areas and surprisingly poorly in others, and the boundaries shift in ways that are difficult to anticipate. This makes AI genuinely harder to use well than it appears.

    We’re all familiar with range anxiety for electric vehicles—the worry that you won’t have enough charge to reach your destination, which leads some drivers to avoid using the technology even when it would serve them well. Something similar is emerging with AI. Call it verification anxiety—“AI is helpful, but I’m never sure when I need to check it.” That uncertainty pushes some people toward avoidance and others toward uncritical overreliance. Both responses are costly.

    Anthropic’s research on AI fluency offers a useful framework in response: delegation, description, discernment and diligence. The 4-D framework gives people an answer to verification anxiety and a vocabulary and practice for staying in the conversation rather than checking out or checking off. What sits at the center of the framework is human judgment, visualized as the description-discernment loop. The capacity to evaluate, question and take responsibility for what the technology produces. Building that capacity—in our students, our faculty, our institutions—is one of the most important things we can do right now.

    Whom the campus serves is a design constraint, not just an aspiration.

    For most of their history, universities have primarily served traditional-age residential students. One of the deliberate design choices of the summit was to put student voices at the center of the day—not as a token gesture, but as a calibration point for everything the morning’s panels had proposed and to set the table for the afternoon. The students elevated the conversation. They are navigating these questions in real time, without the benefit of frameworks or retrospect. Hearing firsthand how different the experience is already for a first-year student versus a graduating senior brought our current moment to life for our audience. Their presence was a reminder that the campus of the future is not an abstraction to them. And yet the campus of the future has to be for far more people than the ones in that room.

    Earlier in the day, the workforce panel reminded the room that there are roughly eight to nine times as many workers in the U.S. who may need reskilling as there are traditional college students. The campus of the future cannot be designed as though that asymmetry doesn’t exist.

    This connects directly to the equity question that runs through any serious conversation about AI and higher education. The stakes are highest for students who pursue higher education to advance economically and for workers navigating job transitions they didn’t choose. Without deliberate investment—from institutions, from technology companies, from philanthropy—broad-access institutions cannot responsibly integrate AI. Those institutions may be exactly where the impact would be greatest. That tension needs to be at the center of how we think about whom the campus of the future is for.

    Higher education has a leadership gap it can no longer afford to ignore.

    The 2026 Survey of College and University Presidents, published by Inside Higher Ed, found that just 1 percent of presidents believe higher education has been highly effective in shaping national conversations about AI policy and ethics. One percent. These are leaders who understand that AI is transforming how humans think, learn and work—and yet the vast majority don’t believe their sector is meaningfully shaping what comes next.

    A recent NBC News poll found that voters trust Democrats and Republicans almost equally on AI—19 and 20 percent respectively. On virtually every other major issue—health care, immigration, the economy—voters have strong partisan preferences and give one party or the other numbers in the 40s and 50s. On AI, neither party breaks 20, and there is no meaningful divide between them. That’s not just a data point. It’s an opening, and it’s an obligation.

    Institutions with real expertise in how humans learn, how knowledge develops and how technology shapes society have a responsibility to be in this conversation. Not as boosters for the technology and not as resisters, but as serious actors with something distinctive to contribute. Universities are, as Brandeis University president Arthur Levine put it, at their best when they have one foot in the library and one foot in the street. One foot grounded in the accumulated knowledge of centuries. One foot engaged with the evolving needs of society and in partnership with others. That balance between knowledge and relevance is exactly what this moment demands.

    The partnership question.

    From the outset, the summit was designed to widen the aperture—bringing together voices from universities, technology companies, start-ups and the learner community around a shared question rather than a shared agenda. More than 300 people were in the room, and what emerged was a genuine willingness to engage across perspectives, with both urgency and humility.

    In the days since, I’ve been encouraged by what attendees have shared. When people reflect on what stayed with them from a particular contributor, they don’t reference organizations or titles. They reference people—their ideas, their arguments, their insights. That shift signals something important.

    Too often, we assume that people in industry and people in higher education are motivated by fundamentally different priorities. Having spent my career working across both worlds—and having had the privilege of advising universities and technology companies alike—I’ve come to believe this assumption breaks down quickly when you bring thoughtful people together around meaningful questions.

    The organizations they represent may operate on different timelines and incentives—that’s real. But the people themselves often share a surprisingly compatible vision for what learning should accomplish and whom it should serve.

    But I want to be careful here. Collaboration is not inherently valuable. Not all partnerships are built for impact. What matters is not simply that we work together, but how—with shared purpose, mutual respect and a willingness to engage ideas on their merits rather than defend institutional positions. The campus of the future will not be built by any one sector alone. It will be shaped through partnerships that are honest about what each party brings, what each needs and what they are jointly responsible for.

    Where we go from here.

    The goal of the summit was not to leave with definitive answers. The questions in front of us are too large for that, and I’d be suspicious of anyone who claimed otherwise.

    But I do think we were able to develop a sharper set of questions to carry forward—ones worth taking seriously in the weeks and months ahead:

    • Are we designing learning experiences that build durable capability, or ones that optimize for performance in the moment? And do we even have clear enough evidence to know the difference?
    • How do we build AI fluency that keeps human judgment at the center—giving people a vocabulary and practice for staying in the conversation rather than outsourcing their thinking? How do we ensure that AI fluency remains contextually connected to the disciplines?
    • What are we doing in our curricula, our learning environments and our institutional cultures to build the kind of thinkers who can use these tools well—and to protect the conditions that make that development possible?
    • When we say we are committed to lifelong learning, what would actually have to change—structurally, financially, culturally—for that commitment to be real rather than aspirational? How might AI help us develop lifelong assessment and lifelong coaching as part of a more expansive lifelong learning model?
    • How do we ensure that AI adoption in higher education closes existing divides rather than widens them, particularly for broad-access institutions and the learners they serve?
• What would it mean for universities to go beyond adapting to actually lead in this moment? And what’s the honest case for university leadership—what can institutions that choose to lead actually accomplish that fast followers can’t?
    • What does a genuinely impactful university–industry partnership look like in an AI era—and how do we build ones defined by shared purpose rather than transactional convenience?

    These aren’t rhetorical questions. They’re the ones I think the field needs to work on, together and with more urgency than institutions typically allow themselves.

    Trumbore’s history is a reminder that enthusiasm alone has never been enough and that we have been here before in some form. Amodei’s two essays are a reminder that the stakes are real in both directions—and that operating responsibly inside that tension is not a compromise. It’s the work. So is protecting the conditions that make real learning possible. So is designing for learners who have been left out. So is building the partnerships that no single sector can manage alone. Ultimately, the campus of the future will not be built by the technology. It will be built by the choices institutions make about how to use it—and whether they have the courage to lead.

    James DeVaney is associate vice provost for academic innovation and the founding executive director of the Center for Academic Innovation at the University of Michigan.