Businesses across industries have been betting big on AI in the workplace—but the results have been, frankly, underwhelming. According to a new MIT Media Lab report, a staggering 95 percent of organizations have seen no measurable return on their investments in generative AI tools.
MIT researchers cite several reasons for this adoption/ROI gap. Chief among them: AI doesn’t slot neatly into many workplace workflows, and most models still lack the contextual awareness needed to adapt to industry-specific tasks.
But a separate team at BetterUp Labs argues there’s another culprit: AI workslop. Writing in Harvard Business Review, the researchers define workslop as "AI-generated content that looks polished but doesn’t actually move work forward." In practice, it means employees end up spending more time fixing, rewriting, or clarifying AI’s "help" than if they’d done the job themselves.
The downstream effect is costly. The report estimates that workers spend nearly two hours a day (1 hour, 56 minutes, on average) dealing with workslop — decoding half-baked ideas, correcting missing details, and reworking content that isn’t actually useful. Worse, the burden doesn’t just stay with the person who generated the workslop: managers and peers get dragged in, creating a ripple effect across teams.
Researchers tie this to cognitive offloading — using external tools to reduce mental effort. But with workslop, that burden isn’t offloaded to a machine; it’s offloaded onto a coworker. The phenomenon is most common among peers (40 percent of cases, according to BetterUp), though managers aren’t immune: higher-ups pass workslop down to their teams about 16 percent of the time.
Overall, BetterUp Labs estimates that companies with 10,000+ employees could be bleeding as much as $9 million a year in lost productivity thanks to the sheer volume of workslop — roughly 40 percent of all AI-generated output in the workplace, according to the report. Beyond the financial hit, there’s also a cultural cost. Employees in the study reported feeling annoyed, confused, and even offended when handed workslop, eroding trust among coworkers.
While one response to bad AI output may be to stop using these tools entirely, BetterUp Labs argues the smarter path is to set clear organizational guidelines for how employees should (and shouldn’t) use AI. That means defining the scenarios where AI genuinely adds value, and drawing firm boundaries where it doesn’t align with company strategy or values.
The researchers also suggest a mindset shift: AI should be treated as a collaborator, not a crutch. In other words, it’s a tool to support good work — not a shortcut to avoid doing the work in the first place.