You’ve probably seen AI slop when a relative sent you an AI-generated image of puppies in a picturesque basket, or a hyper-realistic (but not real) video of a person fighting off a bear. As these tools become inescapable, the rise of AI slop has followed. Much like a bad B-movie, many of these examples feel harmless, even so bad they’re good. The underlying risk isn’t just fake news or misleading media; it’s “wrong answers that look right,” including fabricated citations that appear scholarly at first glance.
It’s no surprise that opinions about AI are nuanced and sharply divided. In “Looking Ahead to 2026,” Microsoft CEO Satya Nadella argues we should move beyond “slop vs. sophistication” and treat AI as scaffolding for human potential, not a substitute for it.
Deepfakes raise the stakes: most people cannot reliably spot them
If AI slop is the problem of false text that reads true, deepfakes are the same problem in video and images: false media that appears authentic. A widely cited iProov study tested 2,000 people in the U.S. and U.K. and found that only 0.1%—just two people—correctly identified deepfake content across a set of images and videos. That’s not a minor gap; it’s a literacy cliff.
Research and reporting increasingly identify deepfake abuse as a significant safety issue, with women disproportionately targeted through harassment and non‑consensual synthetic media. Deepfakes cannot be dismissed as ‘just internet nonsense’ or ‘just content’: they affect bodies, reputations, and real‑world well‑being. Researchers found that websites hosting these deepfakes collectively receive billions of views.
Grok shows what happens when deepfake creation becomes frictionless
The recent controversy around Grok, integrated into X, illustrates how deepfake abuse becomes far more accessible when image generation and editing are only a prompt away. The U.K.’s regulator, Ofcom, launched a formal investigation into whether X complied with its Online Safety Act duties after reports that Grok was used to generate and share illegal content, including non‑consensual ‘undressed’ images and other harmful material.
The broader point isn’t that one tool “went bad.” It’s that ease of use changes scale. When a mainstream platform makes synthetic media effortless to create, harm shifts from niche to routine, and regulators respond accordingly.
Kids watching hallucination on demand
A recent study by Kapwing found that roughly 20% of YouTube videos qualify as “AI slop,” much of it targeting children with endless amounts of nonsensical media. On January 3, 2026, tech commentator Jeremiah Johnson (@JeremiahDJohns) posted an X thread highlighting the surge of low-quality AI-generated YouTube Shorts, writing, “I don’t think people realize how bad the AI slop problem is on YouTube… The video below is the fifth most-viewed video on YouTube in the last week, with over 100 million views in six days. Childlike AI slop Shorts dominate a lot of people’s algorithms now.”
Where higher education comes in
Many schools now publish guidance that prompts students to verify AI outputs and watch for hallucinations. But what’s still missing on most campuses is a simple response playbook for deepfakes: what to document, where to report, and how to support someone who has been targeted.
Advocates have created online training to teach students about AI ethics and risks, with a focus on deepfakes and image-based abuse. After experiencing image abuse as a teen, Ellison Berry developed the Protecting Students course with Adaptive Security to raise awareness and share prevention strategies. Noelle Martin, who faced similar abuse, has published research highlighting how easy access to such tools disproportionately harms women, especially women of color.