Bob had been optimizing prompts for six months. Three thousand words of custom instructions. The AI kept missing the mark anyway.
Turned out he was optimizing everything except what mattered.
Monday morning, break room. Bob microwaved Friday’s coffee. Dave leaned against the counter explaining his “system” to someone from accounting. “See, you have to be really specific with your prompts. I’ve got mine down to a science. Thirty-seven variables, all optimized.”
Bob grabbed his reheated cup.
Monday afternoon, Bob needed to prep for tomorrow’s meeting. Larry thought remote work meant slacking. Bob had found an article that said otherwise.
He opened Copilot and tried the obvious thing.
Summarize this article about remote work.
The AI gave him a summary. Five paragraphs, grammatically perfect, useless.
Bob tried again, more specific.
Summarize this article about remote work policy.
Focus on productivity arguments.
Better. Still missing what he actually needed.
He stared at the screen. Larry didn’t care about productivity in the abstract. Larry cared about whether he could see people working. Bob wasn’t solving for the right thing.
He deleted it and started over.
Larry thinks butts-in-seats = productivity. I'm presenting
an article that challenges this. Extract arguments that'll
land with someone who measures work by visibility, flag gaps
so I'm not blindsided, give me three objections he'll raise
so I'm ready.
The chatbot paused. Then asked a follow-up question about Larry’s management style.
Bob blinked. That’s new.
Tuesday morning, Bob made a fresh pot. Break room coffee, but at least it was hot.
He presented to Larry. Larry sat with his arms crossed, skeptical from the start.
Bob had tried the old way last month. Asked the AI for “presentation ideas” then “presentation outline with data.” Felt canned. Larry saw through it.
This time he’d been specific about what he was protecting.
I'm presenting to Larry who equates office = culture.
I need to reframe async work as culture-building, not
culture-killing. Generate three opening scenarios that
show synchronous meetings are performative, not productive.
What's my weakest argument?
The thing challenged his framing. Suggested a better opening. Bob used it.
Larry’s arms uncrossed halfway through. By the end, he approved a remote work pilot. “Let’s try it. But we’re watching productivity metrics.”
Bob nodded. Didn’t mention he’d already prepped responses for that exact concern.
Two months later, on a Wednesday. The remote work policy was live. New hires were lost by week two. Bob got pulled into the fix.
His French press steamed lazily on the desk. He opened the chat. Stopped himself before typing “How do we improve onboarding?”
That wasn’t the question.
New hires say they're lost by week 2. I think it's info
overload but could be isolation. Map three diagnostic
lenses -- information architecture, social connection,
autonomy -- show me what each reveals, help me figure out
which actually matters before I propose another process
band-aid.
The AI mapped it. Turned out it wasn’t the documentation. Nobody told new hires who to ask for help. Social connection, not information architecture.
Bob had the answer in minutes. Wrote it up: assign a buddy, check in after week one. It was almost embarrassingly simple once he’d asked the right question.
Sarah stopped by his desk, sniffed. “That smells way better than break room sludge. You’re on a streak. What changed?”
Bob tried to explain. Words felt slippery. “I stopped asking Copilot to write stuff. Started asking it to help me think.”
Sarah’s eyebrows went up. “Huh. I’m gonna try that.”
Thursday, Larry came back from a conference excited about Microsoft Loop. “Saw a demo. We should adopt it.”
Bob’s team already drowned in Teams, SharePoint, Planner, OneNote, and that weird wiki nobody admitted to maintaining. Loop would make six. Bob would end up managing the rollout.
Bob could’ve asked “Should we adopt Loop?” He knew better now.
Larry wants Loop because a competitor uses it. I'm worried
we're adding tool #6 to a stack nobody's mastered. Model
three scenarios: smooth adoption, passive resistance, hidden
costs nobody's talking about. What questions should I be
asking before this becomes my problem to manage?
The chatbot flagged the real questions. Who trains people? Who maintains it? Who owns adoption when the initial excitement fades?
Bob wrote them down. Brought them to the meeting with Larry.
Larry paused. “Good questions. Let me think about that.”
After the meeting, Sarah appeared at his desk. “Got extra?” She nodded at his French press.
Bob poured her a cup.
Sarah took a sip, messaged him ten minutes later. “That thing you said? About asking Copilot to help you think? It worked. Actually helpful instead of just… generating.”
Bob typed back. “Yeah. Weird, right?”
Friday, Larry approved Loop anyway. He’d already decided before the meeting. Bob had to announce it.
His Americano from the good cafe around the corner was piping hot. Three power users had mastered Teams workflows. They’d hate this. Fifteen people were lost in the current system. They’d finally get help.
Bob stared at the screen. Didn’t type “Write announcement email.”
We're moving project updates to Loop. This will frustrate
power users who've mastered Teams workflows but help the
people who are lost. Frame it so power users see the upside,
anticipate their 'this is busywork' objection, acknowledge
the tradeoff without apologizing. I need language that
doesn't sound like corporate gaslighting.
The AI drafted something honest. Bob read it twice. Made some tweaks. Sent it.
Power users grumbled. Didn’t revolt. One even replied “finally.”
Dave found Bob at his desk late that afternoon. Dave’s own announcement, for a different initiative that same day, had caused an actual shitstorm.
“How’d you do that?” Dave’s face was tight. “Mine went sideways.”
Bob leaned back. “Stopped optimizing prompts. Started optimizing questions.”
Dave stared. “Huh.”
That week, Bob stopped asking Copilot to execute tasks. Started asking it to help him think. Not “write this” but “here’s what I’m trying to protect; what am I missing?”
That’s intent engineering.
Dave’s still optimizing variables.