HOW WE GOT THE INTERNET ALL WRONG:
It is now hard to remember the optimism with which many people greeted the arrival of the digital world. But back in the 1990s and early 2000s, the evangelists of the internet confidently predicted that it would, as Thomas Friedman wrote in The Lexus and the Olive Tree, published at the cusp of the new millennium, “weave the world together.”
With the benefit of hindsight, it is easy to make fun of such predictions. But the logic behind them seemed compelling at the time. For all of human history until recently, it had been extremely costly and cumbersome for people in different parts of the world to communicate. As late as 1930, Friedman pointed out, a three-minute phone call between London and New York cost about $300. That made it hard for people to develop a greater understanding of each other, or to recognize that they might share all kinds of interests.
By the time Friedman was writing, such a phone call was basically free. It was easy to imagine that, in a world of costless communication, most people would choose to connect with people in faraway locations who were very different from them. Society would, the hope went, grow far more cosmopolitan: far more interested in the well-being of people unlike ourselves, and far less likely to prioritize those who share our group identities.
The truth, as we now know, turned out to be very different. Given the opportunity to communicate with anybody they wish, most people spend their time on social media connecting with people they already know, with those who share their identities, or with those who share the exact same political views. The greater ease of communication was supposed to help the human species transcend its traditional boundaries and expand our collective horizons; instead, it has amplified our tribal instincts and turned every aspect of our politics and culture into a fevered battle between the in-group and the out-group. Early evangelists of the internet conjured up a touching vision of universal human connection. Instead, the technology they rhapsodized about has turned us into tribalist creatures, giving ever greater importance to our race, our gender, our sexual orientation, and our political convictions.
But don’t worry; things can always get worse:

AI IS KILLING THE INTERNET. DON’T LET IT KILL THE CLASSROOM TOO:
There’s a name for this phenomenon: the Dead Internet Theory, which posits that a significant amount of online content is produced not by humans but by AI. The evidence suggests a hard kernel of truth at the core of this argument. More than 40% of Facebook’s long-form posts and more than half of longer LinkedIn posts are likely generated by AI. Engagement with this content is often powered by automated click farms.
AI isn’t merely churning out fluff. In one striking example, bots fueled a disproportionate share of the online discourse following mass shootings, and AI actively spreads misinformation. Online content is increasingly spun up by algorithms for other algorithms to amplify. This deluge of automated content is drowning humanity on the internet.
Lately, it seems that a similar dynamic is charging into our college classrooms with developers of educational technology at its vanguard. Let’s call it the Dead Education Theory, and it works something like this:
A college professor uses one of many dozens of free commercial AI tools to draft a rubric and an assignment prompt for their class. A student pastes that prompt into another AI app that produces an essay that they submit as their completed assignment. Pressed for time, the professor runs the paper through an AI tool that instantly spits out tidy boilerplate feedback. Off in the background, originality checkers and paraphrasing bots duel in an endless game of evasion and detection. On paper, the learning loop is complete. The essay is written. The grade is given. And the class moves on to its next assignment.
It’s entirely likely that this scenario is playing out thousands of times every day. A 2024 global survey from the Digital Education Council found that 86% of college students use AI in their studies, with more than half (54%) deploying it at least weekly and a quarter using it daily. Faculty are increasingly using AI to create teaching materials, boost student engagement, and generate student feedback, although most report only minimal to moderate AI use.
Exit quote: “Banning AI tools isn’t realistic; the genie has escaped that bottle. But instead of allowing AI to drain higher education of its humanity, we must design a future where AI amplifies authentic human thinking. AI will be in the classroom — there’s no question about that. The urgent question is how to keep humanity there as well.”