Today, I used a blend of smartphone, markdown editor, LLMs, auto-transcription and a single meta-prompt to turn my raw thoughts into neat, actionable insights for work. It felt like a magic trick: give an LLM a rough fragment, and it hands back something structured, articulate, and grounded in the accumulated pattern library of human writing [1, 2]. In a sense, it lets one think with more knowledge than one actually holds [3].

Yet the more often this runs, the more something else seems to switch off. A cluster of MIT-linked studies on ChatGPT and student writing finds that when AI is used to draft and refine essays, cognitive load drops, output improves, and recall of one's own work worsens, suggesting weaker underlying processing [4, 5, 6, 7]. Commentators have begun to describe this as cognitive debt: the more effort is offloaded now, the less the brain is forced to encode and consolidate, and the more tempting it becomes to lean on automation again next time [2, 1]. In the learning sciences, this is being sharpened into the idea of metacognitive laziness, where people disengage from the planning, monitoring, and self-correction that constitute active thinking [8, 9, 10].

What feels at stake is not domain knowledge but a set of meta-skills: sustained attention, the ability to sit inside ambiguity, to reason stepwise, test hypotheses, and hold half-finished models in mind long enough for something genuinely new to emerge [11, 1, 12]. These are the muscles that produce judgment, not just fluent text, and several cognitive science and human-computer interaction papers on cognitive offloading and epistemic dependence warn that over-reliance on AI tools may allow those capacities to atrophy [13, 14, 2]. The question, then, is not "Is AI making us stupid?" but "Where do we place the boundary between ideation and synthesis?" [15, 2].
When AI enters too early, it can preempt the generative friction that produces insight; when it comes later, after the clumsy sketch exists, it behaves more like an amplifier than a replacement [1, 16, 12]. That boundary also defines whether AI is a partner or a prosthetic. A partner contests and extends what you have already articulated; a prosthetic silently performs the cognitive labor for you [3, 14]. Over time, the first can raise the ceiling on what you can do, while the second may quietly reduce your reliance on your own mind [11, 1, 13]. The work now is not to reject these systems, but to deliberately protect the slow, wandering arc between intuition and articulation, so that what accelerates is expression, not the abandonment of thought itself [8, 12].

**Note**: this essay was written by AI from my initial raw concept on cognitive regression due to over-reliance on AI. The literature search was performed, curated and summarised by AI, as were the final formatting and the accuracy check of the bibliography. The whole article took less than 25 minutes from inception to final version (significantly faster than my usual writing speed).

---

## **References**

1. Polytechnique Insights. "Generative AI: The risk of cognitive atrophy." 3 July 2025. https://www.polytechnique-insights.com/en/columns/neuroscience/generative-ai-the-risk-of-cognitive-atrophy/
2. BrainOnLLM. "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using Large Language Models." 31 December 2024. https://www.brainonllm.com
3. Vredenburg, Karel. "AI: A Prosthesis for the Brain." June 2025. https://karelvredenburg.com/home/2025/6/12/ai-a-prosthesis-for-the-brain
4. Mashable. "Using ChatGPT to write essays may harm critical thinking, MIT study finds." 20 June 2025. https://mashable.com/article/chatgpt-writing-study-mit-cognitive-cost
5. Boston.com. "New MIT study brings potential downsides of ChatGPT use to light." 20 June 2025. https://www.boston.com/news/local-news/2025/06/20/new-mit-study-brings-potential-downsides-of-chatgpt-use-to-light/
6. Towards AI. "How ChatGPT Cut Cognitive Load by 47 percent in MIT Study." 2025. https://pub.towardsai.net/from-insight-to-amnesia-how-chatgpt-cut-cognitive-load-by-47-in-mit-study-b5fdd7121ba1
7. TIME. "ChatGPT's Impact on Our Brains According to an MIT Study." 16 June 2025. https://time.com/7295195/ai-chatgpt-google-learning-school/
8. University of Auckland. "When AI tools promote a form of metacognitive laziness." 1 October 2025. https://www.auckland.ac.nz/en/news/2025/10/02/When-ai-tools-promote-metacognitive-laziness.html
9. Emergent Mind. "Metacognitive Laziness in AI and Human Learning." 19 December 2025. https://www.emergentmind.com/topics/metacognitive-laziness
10. arXiv. "Beware of Metacognitive Laziness." arXiv:2412.09315, 11 December 2024. https://arxiv.org/abs/2412.09315
11. Pursuit (University of Melbourne). "As AI gets smarter, are we getting dumber?" 2025. https://pursuit.unimelb.edu.au/articles/as-ai-gets-smarter,-are-we-getting-dumber
12. Harvard Gazette. "Is AI dulling our minds?" 12 November 2025. https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/
13. PubMed Central. "Cognitive offloading or cognitive overload? How AI alters the mental landscape." https://pmc.ncbi.nlm.nih.gov/articles/PMC12678390/
14. IEEE Computer. "Epistemic Partner or Cognitive Crutch: A Conceptual Model of AI Assisted Cognition." https://www.computer.org/csdl/magazine/ic/5555/01/11222988/2bgx2pBD7os
15. The Conversation. "Is ChatGPT making us stupid?" 24 July 2025. https://theconversation.com/is-chatgpt-making-us-stupid-255370
16. Advai Limited. "The Impact of AI Usage on Cognition." 29 October 2025. https://www.advai.co.uk/journal/posts/the-impact-of-ai-usage-on-cognition/