You’re forty-five minutes into a chat with your AI. You started with something simple: draft an email, outline a lesson plan, whatever. Now you’re on iteration twenty-three, and each version is slightly different but not actually better. You keep thinking “just one more tweak.” You’re polishing a turd, hoping it’ll transform into a rainbow unicorn horn.
You know it’s not working. But you can’t stop. Because stopping means admitting you’ve wasted forty-five minutes on this.
This is the grimoire trap. And you’re not alone.
The Diagnosis
Most people are stuck at Level 1. They think AI mastery means having better prompts. They’re wrong.
They collect templates from intro sessions. They swap recipes in Teams channels and Reddit threads. They build folders of “proven prompts” like they’re trading cards. They search “best ChatGPT prompts” and copy-paste their way through the workday.
But collecting spells isn’t the same as learning wizardry.
This is how we teach AI literacy right now. I run these sessions. I’ve done it dozens of times. Every intro session follows the same script: “Here’s a prompt for email writing. Here’s one for research. Here’s one for brainstorming.” People leave with a handout full of examples, feeling armed.
It’s not entirely your fault. But it’s definitely your problem.
The problem? You’re a medieval monk copying manuscripts. Except monks knew they were monks. You think you’re doing magic.
No one told you spell collection is the beginning, not the end.
The Path to Mastery
AI mastery is a journey through three levels.
Level 1: Incantations (The “What”)
Basic prompts everyone learns first. Formatted emails, research techniques, reverse prompting for critical thinking. Necessary table stakes. Start here.
Level 2: Alchemy (The “How”)
Learning to mix context, constraints, and personas. Building prompts from logic instead of copying from libraries. Understanding why certain structures work.
Level 3: Wisdom (The “When”)
Meta-questions about AI literacy. When to stop. When to start fresh. When to go manual. This is where mastery lives.
Let’s move through the foundations, then get to what actually matters.
Level 1: Incantations
You have a folder of “good prompts.” Maybe it’s in OneNote. Maybe it’s a Word doc titled “AI Prompts - DO NOT DELETE.” You copy-paste from Reddit threads. You swap prompting recipes with colleagues.
This feels helpful. It gives you starting points. It reduces the anxiety of the blank chat box. Sometimes it even produces decent results.
But you’re not learning the logic behind the prompts. You don’t know WHY they work and, more importantly, why they don’t. You can’t adapt when the context changes. You’re dependent on the spell book.
If you’re just copy-pasting, you aren’t a wizard. You’re a scribe. You can’t stay here and expect mastery.
Level 2: Alchemy
At Level 2, you stop asking “What’s the prompt for this?” and start asking “How do I build the right prompt for this specific situation?”
You’re mixing your own context. Adding constraints deliberately. Experimenting with personas. Building prompts from logic, not memory. You understand WHY “act as a skeptical reviewer” works better than “give me feedback” for certain tasks.
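To make that concrete, here's one way a Level 2 prompt might be assembled from those parts. This is a minimal sketch: the persona, context, and constraints below are invented for illustration, not a proven recipe.

```python
# A Level 2 prompt is built from components, not copied whole.
# Each piece below is an illustrative example you'd swap for your own.

persona = "Act as a skeptical reviewer who has read hundreds of proposals."
context = "I'm drafting a one-page program summary for school trustees."
constraints = (
    "Point out vague claims and unsupported numbers. "
    "Do not rewrite the text; list problems as bullet points. "
    "Limit yourself to the five most important issues."
)
task = "Here is the draft:\n[paste draft here]"

# Assemble the components into a single prompt, separated by blank lines.
prompt = "\n\n".join([persona, context, constraints, task])
print(prompt)
```

The point isn't this particular template. It's that each component answers a question you asked yourself first: who should the AI be, what does it need to know, and what should it not do. Change the situation, and you rebuild from the same logic instead of hunting for a new spell.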
You try different approaches. You iterate with intention, not desperation. You can explain what you’re doing and why.
This is real progress. You feel competent.
But competence without judgment is just making mistakes at scale. You can build perfect prompts all day long. But if you don’t know which prompts to build, or when to stop entirely, you’re not a wizard. You’re just busy.
Level 3: Wisdom
You have instincts. You ignore them because the AI sounds confident and you’re not sure. Stop that.
Remember Mickey Mouse in Fantasia? He knows how to make a broom dance. He doesn’t know how to stop it. That’s where Level 2 ends. You’ve learned to cast spells. You haven’t learned when to put the damn wand down.
The Wizard’s Audit
Every time you’re deep in an AI interaction, pause and ask three questions:
Is this true?
Not “is this well-written?” but “is this accurate, honest, grounded?” Can you verify the facts? Does this match reality, or is it plausible-sounding bullshit? Would you stake your reputation on this?
Is this me?
Does this sound like you, or like an AI trying to sound like you? Are you still in the driver’s seat? Could you defend every choice in this output? If you removed your name from it, would people still recognize your thinking?
Is this enough?
Have you crossed the line from useful to excessive? Are you polishing because it needs it, or because you can’t stop? Is this actually better than three iterations ago, or just different? Would starting fresh give you a better result in less time?
This isn’t a one-time checklist. It’s a habit. A pause point. Training wheels for wisdom.
Let’s dig into what each question actually means and why you need it.
Is This True?
AI is a confident liar. It has the unearned certainty of a mediocre white guy at a conference panel.
You’re using AI to research a policy question. It gives you three studies supporting your position. Perfect. Except when you try to find them, two don’t exist and one says the opposite of what the AI claimed.
You ask for lesson plan ideas about the Halifax Explosion. It gives you a beautiful breakdown of how the explosion happened in 1918. Except it was 1917. Close enough to sound right. Wrong enough to fail a grade 8 history quiz.
You almost used it anyway. You know you did.
If it matters, verify it. If you can’t verify it, don’t use it.
Is This Me?
A teacher uses AI to draft an email to parents about a student struggling with behavior. The AI writes something professional, empathetic, well-structured. She sends it. A week later, she realizes: she outsourced the relationship to a robot. The email was fine, but it wasn’t hers. The parents could tell. Not because it was bad. Because it was smooth. And she’s not smooth. She’s a teacher who gives a shit, and that shows up messy.
A student uses AI to write an essay introduction. The AI makes it more formal, more academic, more… impressive. He realizes: this isn’t how he thinks. He can’t defend these choices. If someone asks him about it in class, he’ll stumble. That’s the tell.
If you removed your name from the output, would people still recognize your thinking? Could you defend every choice? Are you still in the driver’s seat, or has the AI taken the wheel?
Sometimes the output is technically correct but the vibe is off. You know AI slop when you see it: overly formal, suspiciously comprehensive, and bloated with enough adverbs and adjectives to summon the ire of Hemingway’s ghost.
If it feels like generic AI, no amount of prompting will fix it. Change your approach or go manual.
Is This Enough?
This is the hardest question because it requires you to fight your own instincts.
You’ve been taught that more iteration equals better results. That persistence pays off. That if you just keep refining, you’ll get there. Sometimes that’s true.
But sometimes the problem isn’t with iteration.
You’re fifteen iterations into a quiz question. Each version is slightly different but not actually better. You keep thinking “just one more tweak.” The problem isn’t the phrasing. The problem is the question itself. It’s testing the wrong thing. No amount of iteration will fix a fundamentally broken premise.
You’re revising an essay introduction for the eighth time. The AI keeps making it wordier, more formal, more polished. But it’s losing your voice. You’re not making it better. You’re making it more like every other AI-generated introduction on the internet. Congratulations. You’ve automated mediocrity.
Start fresh when the base material is wrong. Go manual when the AI is leading you by the nose. Get a human in the loop when you’re outsourcing judgment, not just work. The “New Chat” button isn’t failure. It’s recognizing that you’ve learned what you needed and it’s time to try again with better ingredients.
A faculty member spends twenty iterations trying to get AI to generate quiz questions. They’re still not right. She starts fresh with clearer constraints. Done in three. Bam.
A student polishes an essay introduction endlessly. The AI keeps making it worse. He realizes: write it yourself. Use the AI for feedback instead.
Sometimes pen and paper is faster. Sometimes your brain is clearer. Sometimes the tool is getting in the way of the work.
You’re forty-five minutes into a chat, iteration twenty-three, and suddenly you pause. You run the audit. Is this true? Is this me? Is this enough?
The answer is no. Time to stop.
That’s wizardry. You knew it fifteen minutes ago.
Close the chat. You know enough.