When Workslop Comes for You

Back in Grade 12, I was late handing in a book report. My grand strategy? Write the longest damn report anyone had ever seen. Forty-five pages of filler. I figured if I buried the teacher in words, maybe the lateness would sting less. I got a decent grade, minus the penalty. But across the cover page, in red ink, was this gem: “Your act of atonement should not be in forcing me to read a 45-page book report.”

“That’s workslop. Looks polished. Says nothing. Wastes everyone’s time.”


Naming the villain

There’s a new label floating around: workslop. BetterUp, a coaching vendor, coined it in a study with Stanford (Harvard Business Review). Their definition: AI-generated content that looks good but lacks substance. They also happen to sell the cure. Useful idea, but let’s not ignore the sales pitch.

Still, the signal’s there. Axios reported on the survey of 1,150 workers: people spend more time reworking AI drafts than creating from scratch, and colleagues trust you less if they catch you leaning on slop (Axios).


The mirror

OpenAI published a paper this month on why LLMs hallucinate (OpenAI). Turns out they’re trained to prefer sounding confident over admitting ignorance: most benchmarks score “I don’t know” the same as a wrong answer, so a confident guess is always the better bet. Models learn to bluff.

“We taught the machines our worst habit, and they’re giving it back with interest.”

Sound familiar? We reward humans the same way. In meetings, in reports, in emails: polish beats candor. We taught the machines our worst habit, and they’re giving it back with interest.


The stakes

This isn’t abstract. In higher ed, workslop burns time, erodes trust, and undermines credibility. A KPMG Canada survey found 67% of students say they’re not learning or retaining as much when they use AI. Faculty report that monitoring AI use adds workload (EdTech Magazine). Administrators are bullish, but only half of faculty are on board (Chronicle survey).

Add the Canadian context: we rank near the bottom in AI literacy (KPMG global report). Trust in AI is already fragile here. That makes sloppy adoption riskier.

“People are running with scissors. Without the safety cap.”

I’m not being doom-y here. These tools are game changers and I genuinely embrace them. But it’s early days and people are running with scissors. Without the safety cap.


Spotting it

You’ll know you’re knee-deep in workslop when:

  • Editing takes longer than writing.
  • You reread a sentence because it sounded nice but said nothing.
  • You roll your eyes at the chatbot.
  • You quietly stop trusting a tool that promised to save you time.
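The checklist above is a human smell test, but you can sketch a crude machine version of it. Here’s a toy heuristic that flags drafts heavy on stock filler phrases. To be clear: the phrase list, function names, and threshold are all my own invention for illustration, not a validated detector.

```python
# Toy workslop heuristic: flag drafts where too many stock filler
# phrases appear. The phrase list and threshold are illustrative
# guesses, not a validated detector.
FILLER = [
    "in today's fast-paced world",
    "it is important to note",
    "leverage synergies",
    "at the end of the day",
    "a game changer",
    "delve into",
]

def filler_ratio(text: str) -> float:
    """Fraction of known filler phrases present in the text (0.0 to 1.0)."""
    lowered = text.lower()
    hits = sum(1 for phrase in FILLER if phrase in lowered)
    return hits / len(FILLER)

def looks_like_workslop(text: str, threshold: float = 0.3) -> bool:
    """True if the draft trips enough filler phrases to warrant a closer read."""
    return filler_ratio(text) >= threshold
```

A sentence like “It is important to note that, at the end of the day, we must delve into this” trips three of the six phrases and gets flagged; a plain factual sentence doesn’t. Real detection is harder than phrase matching, obviously, but the point stands: slop has a signature.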


Fighting back

Here’s the toolkit I actually use:

Rubric first. Use this to measure a draft before you sink time into it.

You are my writing editor. 
Create a 5-point rubric to judge accuracy, clarity, depth, alignment, and uncertainty.
Score this draft against the rubric.
Highlight areas that scored below 3.
Suggest one fix per weak area.
Rescore after fixes.
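If you run this through an API instead of a chat window, the rubric step is easy to template. A minimal sketch, assuming you just want to wrap a draft in the prompt above (the function name and criteria list are mine; the wording mirrors the prompt):

```python
# The five rubric criteria from the prompt above.
RUBRIC_CRITERIA = ["accuracy", "clarity", "depth", "alignment", "uncertainty"]

def rubric_prompt(draft: str) -> str:
    """Wrap a draft in the rubric-first editing prompt."""
    criteria = ", ".join(RUBRIC_CRITERIA)
    return (
        "You are my writing editor.\n"
        f"Create a 5-point rubric to judge {criteria}.\n"
        "Score this draft against the rubric.\n"
        "Highlight areas that scored below 3.\n"
        "Suggest one fix per weak area.\n"
        "Rescore after fixes.\n\n"
        f"DRAFT:\n{draft}"
    )
```

Send the returned string as your message to whatever model you use. The value isn’t the code; it’s that the rubric stays identical across drafts, so your bar doesn’t drift.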

Reverse prompting. Forces the AI to ask you clarifying questions first, so you’re driving context.

I want you to interview me, asking one question at a time. 
Each answer I provide should inform the next question you ask. 
The intent of you interviewing me is [INSERT GOAL].

Critic prompts. Get the AI to poke holes in its own work so you don’t have to later.

Critique the weaknesses in this draft.
Generate a counterargument to the main thesis.
Compare thesis vs. counterargument.
Highlight blind spots or missing evidence.
Suggest revisions to strengthen balance.

Uncertainty welcome. Encourage the model to say “I don’t know” instead of bluffing.

If you are less than 80% confident, reply “I don’t know.”
Flag statements with confidence levels (high/medium/low).
Request clarification instead of guessing.
Explain why you are unsure when you say “I don’t know.”
Offer options for what info would improve confidence.
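If you ask for confidence flags, it helps to pick a machine-readable tag format so you can sort the reply afterwards. Here’s a sketch that assumes the model tags each statement like “The deadline is Friday. [confidence: low]”; that tag convention is mine, not something any model does by default.

```python
import re

# Assumes the model was asked to tag each statement like:
#   "The deadline is Friday. [confidence: low]"
# This tag format is my own convention, not a model default.
TAG = re.compile(
    r"(?P<text>.+?)\s*\[confidence:\s*(?P<level>high|medium|low)\]",
    re.IGNORECASE,
)

def split_by_confidence(reply: str) -> dict:
    """Bucket tagged statements from a model reply by confidence level."""
    buckets = {"high": [], "medium": [], "low": []}
    for match in TAG.finditer(reply):
        buckets[match.group("level").lower()].append(match.group("text").strip())
    return buckets
```

Everything in the “low” bucket is your fact-checking to-do list. Which is the whole point: the model tells you where it’s bluffing, instead of you hunting for it.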

“I’d rather see ‘I don’t know’ than spend half an hour cleaning up a load of BS.”

This isn’t anti-AI. It’s pro-value. AI is great at summarizing, structuring, and poking holes. Let it do that. But you gotta drive the bus.


NSCC lens

Where does this land at NSCC? It could show up anywhere folks are time-constrained and tempted to take shortcuts. Students, faculty, staff. All of us. Slow the firetruck down. The risk isn’t that AI replaces us. The risk is that workslop creeps in, wastes cycles, and weakens trust. That’s avoidable if we start small, measure honestly, and scaffold with real guardrails.


Closing

“Perfect is the enemy of done. Slop is the enemy of value.”

AI won’t take our jobs tomorrow. But workslop could grind down our trust today.

In bridge, length over strength might win the hand. In communication, it’s just slop.

I learned this back in high school: length isn’t value. I’d love to hear your own stories about AI slop: when you’ve read it, when you’ve written it, when you’ve caught it. Because workslop isn’t new. AI just put it in sharper relief.