
Is AI Killing Critical Thinking Skills?

  • Writer: Bernice Loon
  • May 27
  • 5 min read

If I were a student today, I would be turning to Generative AI tools like ChatGPT every now and then, enjoying the surge of confidence when a language model untangles a tricky idea in seconds and feeling a quiet jolt of satisfaction when an on-screen marker pinpoints the step where my algebra went wrong. At the same time I would guard the understanding I earn on paper, knowing that genuine insight cannot be downloaded.


This blend of excitement and caution shapes the article that follows. I will look at how AI already supports learning in classrooms, celebrate the progress made, highlight the hidden risks, and close with a wild proposal for assessment reform that keeps critical thinking firmly in charge.



AI In Everyday Lessons

In Singapore schools, open the Student Learning Space (SLS) and you will find a set of intelligent helpers woven into ordinary teaching. Authoring Copilot can propose a lesson outline for teachers, complete with warm-up questions, practice tasks and quizzes. Short-Answer and Mathematics Feedback Assistants mark drafts within seconds, adding colour-coded hints so pupils see exactly where to improve. An Adaptive Learning System (ALS) quietly tracks each response and chooses the next question at just the right level, stretching strong learners while supporting those who need more practice. All these services sit inside the EdTech Masterplan 2030, which places AI literacy alongside cyber well-being as a core digital skill and offers self-paced tutorials that explain where language models can mislead.



Using AI As Leverage, Not A Crutch

I think of AI the way a careful investor thinks about borrowed capital. You do not take on debt until you understand the business inside out; otherwise leverage will magnify your ignorance instead of your returns.


In study, the sequence is similar.


Take the Tectonics Cluster in Geography, for example. First, build your own grasp of the topic. Learn the names of the major tectonic plates, recognise examples of major plate boundaries, describe what happens at subduction zones and explain to a classmate why Chile and Japan experience frequent but differing patterns of seismic risk. Only after that groundwork should you open a prompt window on a Generative AI tool. Ask the model for five further factors that determine how severely an earthquake affects a country, such as building codes, depth of the focus, soil type, emergency communication systems or population distribution. Then scrutinise the list. Keep explanations that align with evidence, discard those that do not and rewrite the argument in your own words, clearly stating why each factor deserves its place. That filtering and reshaping process is the core of critical thinking, and like sound capital, it compounds in value throughout a lifetime.


From my point of view, the use of AI for learning has presented three clear advantages. First, students now receive feedback almost instantly, so they can correct mistakes while ideas are still fresh rather than waiting days for marked work. Second, every learner follows an adaptive pathway, because the system raises or lowers difficulty in real time, preventing frustration for those who struggle and boredom for those who advance quickly. Third, teachers gain more time for high-value work, since Authoring Copilot handles much of the routine lesson preparation, freeing teachers to mentor individuals, design enrichment activities and liaise with parents.



Where Vigilance Is Needed

The glow of progress can hide fine cracks. OECD researchers warn that polished AI prose may tempt students to mistake fluency for understanding. UNESCO reminds schools that algorithms embed values, so unquestioned adoption can narrow debate.


Indeed, Generative AI comes with some intertwined limitations that educators and students must confront with clear eyes. Hallucination stands first. Large language models can invent plausible-sounding facts, statistics and even academic references that simply do not exist. A student who has not yet built a firm base of subject knowledge may copy these fabrications into revision notes or essays, unknowingly seeding error at the very foundation of learning.


Closely linked is confirmation bias on fast-forward. An AI system often mirrors the framing of the prompt it receives, so if a student asks a loaded question the model will readily reinforce the initial assumption rather than challenge it. This accelerates an ancient cognitive trap: we seek data that fits our worldview and overlook what contradicts it. Counteracting this tendency demands deliberate adversarial prompting: ask the model to argue from the opposite stance, then compare the competing outputs.


Third is the illusion of mastery. AI produces polished prose in seconds, and the smoothness of an answer can deceive a student into believing genuine understanding has been secured. Yet true mastery involves wrestling with uncertainty, explaining ideas aloud and reconstructing arguments from memory. If students bypass that struggle, comprehension may collapse when they must apply the concept in an unfamiliar context such as a high-stakes examination or a real-world problem.


The fourth limitation is the erosion of foundational skills. Consistent reliance on auto-generated summaries, instant paraphrases and AI-crafted outlines can weaken the mental muscles developed through note-taking, memory retrieval and structured writing. These abilities cultivate attention, synthesis and logical sequencing. Without them, students may struggle to produce coherent arguments when connectivity fails or to solve multi-step problems in settings where AI assistance is restricted.



A “Crazy” Idea

So this brings me to a "crazy" idea that I've been thinking about. What if every classroom ran a year-long experiment called The Thinking Audit? At first glance it sounds mad: students post their entire reasoning process online, teachers mark less of the finished essay and more of the messy draft, and parents can log in to watch the intellectual wrestling match unfold. Yet exactly this kind of radical transparency could hard-wire the habits that keep human judgement ahead of artificial shortcuts.


Under a Thinking Audit every pupil keeps a living portfolio of their interactions with AI. Each time the model gives an answer, the student records three moves.


  1. Adversarial prompting: ask the model to refute its first claim and capture both sides of the debate.

  2. Source double-checking: attach at least two outside references, from journal articles to government data, that either confirm or contradict the model.

  3. Reflective journalling: write a short note explaining why certain points were kept, adapted or rejected.


And since these artefacts are visible to teachers, peers and parents, the routines become second nature, like fastening a seat belt before driving off.
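To make the three-move routine concrete, here is a minimal sketch of what one portfolio record could look like as data. The `AuditEntry` class, its field names and the completeness rule are my own illustration, not part of SLS or any existing tool.

```python
from dataclasses import dataclass, field

@dataclass
class AuditEntry:
    """One Thinking Audit record: an AI answer plus the student's three moves."""
    question: str
    model_answer: str
    counter_argument: str = ""                    # move 1: adversarial prompting
    sources: list = field(default_factory=list)   # move 2: outside references
    reflection: str = ""                          # move 3: why points were kept or rejected

    def is_complete(self) -> bool:
        """An entry counts only when all three moves are present,
        including at least two independent sources."""
        return (bool(self.counter_argument)
                and len(self.sources) >= 2
                and bool(self.reflection))

# Hypothetical Geography entry on earthquake severity
entry = AuditEntry(
    question="What factors determine how severely an earthquake affects a country?",
    model_answer="Building codes, focal depth, soil type, population distribution.",
)
entry.counter_argument = "Asked the model to argue that wealth alone explains outcomes."
entry.sources = ["USGS earthquake data", "Japan Cabinet Office disaster report"]
entry.reflection = "Kept building codes and focal depth; rejected the wealth-only claim."
print(entry.is_complete())  # prints True
```

The point of the `is_complete` check is that the portfolio can flag, at a glance, which AI interactions a student has actually audited and which were simply copied.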



Assessing Reasoning Instead of Recall

Traditional exams award marks for the polished answer. In a Thinking Audit the weight shifts to the route taken.


Picture a Tectonics task: a magnitude 7 earthquake hits Japan and Peru on the same day. Students may open textbooks and online sources but must explain why the two governments respond differently, tracing three constraints and incentives in each country. They upload an annotated transcript showing every prompt tried, how evidence was chosen and where contradictions were resolved, then defend one critical decision in a five-minute viva. The mark scheme rewards clarity of logic, relevance of data and honesty about uncertainty. Copying a chatbot’s first draft is useless because the grading lens is fixed on process, not polish.


A Thinking Audit may sound like a wild idea, yet it can potentially tackle the real problem head-on:

AI makes answers cheap, so schools must make reasoning priceless.


Final Thoughts

Generative AI feels like an exhilarating gift, offering feedback at the speed of thought and shaping practice with mathematical precision. Yet the deepest measure of education is still the clarity and courage of a student's own thinking. When assessments recognise the journey as well as the destination, young people will treat AI as trusted leverage that amplifies well-founded judgement rather than a mask hiding fragile foundations.


Students who can explain, test and refine their ideas will guide the machines of the future with human insight and moral purpose, rather than be guided by them.



© 2025 by Bernice Loon
