Right now, most institutions are still thrashing. Multiple initiatives, no coherent strategy, departments rolling out AI independently. The messaging is scattered. Some leaders are excited. Some are defensive. Some haven’t said anything yet. It’s chaos, but it’s honest chaos. Nobody’s pretending to know what they’re doing.

But this is changing. The institutions that are getting their act together – that have appointed AI committees and drafted governance policies and organized training modules – are moving toward coordination. Toward a message.

All tech change looks like this: An all-hands meeting. Someone from leadership stands up and says the new tech is an incredible opportunity. There’s a slide deck. There’s a governance task force. There’s a volunteer champion network. There are training modules with completion deadlines.

Everyone nods. Someone takes notes.

But if you walk to the parking lot afterward, it quickly stops looking like consensus.

This is where institutions are heading. Every college. Every large org that’s moved past the thrashing phase. The message gets smooth. The discourse goes underground.

It’s at the water coolers. The breakrooms. Anywhere people can speak without being recorded. That’s where the actual conversation is happening. And it’s messy.

There’s a faculty member who built a whole teaching practice around a hard-won skill set. Now a chatbot generates what she spent years mastering. She’s been told it’s an opportunity. What she’s hearing is: your market value just dropped.

And there’s a student navigating inconsistent implementation – some faculty ban AI, some require it, some don’t mention it at all. Which classes will prepare you, and which ones are experimenting on you?

What nobody says in the meeting: I don’t want this. I didn’t ask for this. I’m terrified.

And if you gaze for long into an abyss, the abyss gazes also into you.

What gets said in the lunchroom: “Did you see that? They’re just expecting us to figure it out on our own.”

I’m not the only terrified person here. And sure, I’ve been ahead of the curve on AI adoption. But I’ve been in enough rooms to know what this sounds like, and it ain’t Luddism. It’s normal people watching the ground shift under the skills they spent years earning. Watching something approximate a good-enough version of their life’s work in minutes. That’s rational. That’s the right response to real uncertainty.

The tools aren’t the problem. The culture oozing up around them is.

Institutional AI rollout is being managed as a change management problem. The framework is adoption metrics: training completion rates and governance compliance. The success measure is uptake.

When your metric is adoption, you need people to perform it. Belief optional. The real measure should be derived value, not raw usage. Until institutions measure whether AI actually helps, the incentive structure won’t change.

Leaders are performing too, one level up. Most of them aren’t certain either. I’ve sat in rooms where leaders present AI strategy, and I can see the uncertainty in the pause before they say “transformational.”

The zeitgeist says: “AI is transformational. AI is an opportunity. Get the hell on board.”

Anyone who says anything else sounds like they’re not keeping pace. Sounds like they’re afraid of change.

So you perform enthusiasm. Everyone does. You go to the meeting and you nod and you say it sounds interesting and you come home and you wonder if you’re being left behind or if this whole thing is a waste of time.

The official channel has the performance. The actual experience? It lives in the side-quest chats, in the hallway conversations with people you trust. This is the liminal workplace: the gap between what’s announced and what’s actually true.

You can’t close a trust gap with better slide decks.

When skepticism becomes unspeakable, you lose the people who actually catch problems. The people who’ve spent years in the trenches. The faculty who’ve taught for twenty years. The admin who knows where the system really breaks. The staff person who knows what matters to people. You silence them and call it innovation. Good job.

And you know what’s worse? They know you’re doing it. They can feel the culture saying: your experience doesn’t matter here. Your questions aren’t welcome. Your doubt is a failure of vision. So they comply. They nod. They learn the tool. And they stay quiet.

That’s not adoption. That’s bullshit. It’s people performing competence while watching something mimic their competence. It’s fragile as hell. The moment the tool fails, the hype cycle turns, or someone finally asks “are we actually better off?”, the whole thing collapses, because there was never real belief anyway.

The institution looks like it’s winning. Adoption rates up. Training complete. Governance in place. But it’s building something uglier underneath: a credibility gap that gets wider every time someone shuts up instead of standing up. That’s not a slow leak. That’s credibility debt, accruing interest.

Good looks like a meeting where “we don’t know yet” is a legitimate outcome. Not a failure of vision, not a gap to close, but a real answer that emerges from actual conversation. Where the doubt in the room is the material, not the problem to manage.

The only way this changes is if someone with power stops performing. If a leader asks, out loud in a meeting, why the hell they’re doing this, and actually means it.

Not strategy. Not theater. Real.

Stop.

Stop performing. That’s what closes the gap. Not better slide decks. Not listening sessions. Someone actually stopping.

Find one trusted peer and name the doubt out loud.

If you’re a leader, it has to start with you.

Are we brave enough to admit we don’t know what the hell we’re doing?