
Why critical thinking is essential in an LLM world


“I’ve presented my argument to my LLM. Have you?” “Of course. Let’s let them hash it out and circle back.”

Welcome to the future (sort of)

Picture this: It’s 2027. You’ve got a disagreement with a colleague, maybe about professional development models, data privacy policies or whether pineapple belongs on pizza (a classic).

But instead of diving into the usual back-and-forth, you each go to your corners and input your arguments into your large-language models.

Moments later, your LLM and theirs are in a digital arena, going full Rock ’Em Sock ’Em Robots, swinging citations, footnotes and logical fallacies at each other like they’re in a debate league from the future.

You wait patiently while the bots do battle. Then, your LLM gently pings:

“Good news. You won the argument. Here’s the summary and a GIF to celebrate.”

You nod, satisfied. Your colleague’s LLM disagrees, of course. So now their LLM is arguing with your LLM about how they each interpreted the debate rules. A third LLM moderator is brought in.

And just like that, you’re three levels deep in a logic loop with AIs who are very confident and very wrong… or maybe very right? You’re not even sure anymore.

LLMs are brilliant. But they’re not you

This scenario is exaggerated. (Barely.) But the reality is here: we increasingly use LLMs to draft emails, shape arguments, write policies, analyze documents and generate responses in professional and personal settings.

That鈥檚 not a bad thing. In fact, I rely on one occasionally.

But here’s the catch: when everyone in the conversation is using an LLM as their thinking assistant, we run the risk of outsourcing the thinking altogether.

We trade in nuance for nicely worded output. We skip reflection because the draft “sounds good.” We assume the logic holds because the paragraphs are coherent.

And suddenly, we’ve got a society full of people reacting to polished prose instead of thinking through messy ideas.

What happens when the bots agree?

Let’s say both LLMs in our debate agree. They say:

“Consensus reached. You’re both right, depending on your values and priorities.”

Well, that’s… great? Or is it? Should we take the LLMs’ word for it? Should we still debate each other? Do we even want to?

At what point do we stop engaging and just start delegating?

This is where critical thinking becomes essential: not as a retro skill from a pre-AI era, but as a core competency for navigating a world where content is cheap, conversation is synthetic and conviction can be manufactured on demand.

Why it matters in education (and everywhere)

In classrooms and conference rooms, we’re seeing this already. Students use LLMs to generate essays. Educators use them to plan lessons. Tech leaders use them to write reports, create policies and analyze risks.

None of this is inherently bad. In fact, it’s often wonderful.

But we cannot confuse articulation with understanding, or consensus with truth. We need to:

  • Pause before we accept the first output.
  • Challenge assumptions, even if they’re phrased beautifully.
  • Engage in conversations ourselves, not just through our digital proxies.

Because at the end of the day, AI can assist with the how, but only we can define the why.

What’s worth fighting for

LLMs are brilliant sparring partners, research assistants, and yes, even debate proxies. But they’re not a substitute for human judgment.

So yes, use your LLM. Let it help you write the email. Let it help you structure your thoughts. But don’t forget to bring your brain to the conversation.

The future of thought isn’t AI vs. AI; it’s humans who know when to use it, why they’re using it and what it means in the bigger picture.

The bots can throw punches. But only we can decide what’s worth fighting for.


Stacy Hawthorne, EdD
Stacy Hawthorne, EdD is board chair of CoSN and executive director of the EdTech Leaders Alliance.
