ChatGPT will make you dumber – if you don’t learn how to use it

ChatGPT will make you dumber – if you use it incorrectly. Lewis Z Liu outlines how to use the tech to make you more intelligent, not less

I’m writing this column just after visiting the Taj Mahal in India this past weekend. The monument, far more powerful in person than in photographs, stands as one of the most magnificent manifestations of human genius.

Unfortunately, the human species appears to be becoming “dumber”; or at least, our demonstrated mental faculties have been in steady decline worldwide since around 2012. Measures of reasoning, reading and quantitative skills have all deteriorated sharply. The decline aligns closely with the rapid rise of smartphone usage (2012 marked the year when global smartphone adoption surpassed 50 per cent), reduced engagement with reading and growing information overload, including from social media.

More concerning, however, is that this assessment doesn’t yet factor in the accelerating role of AI, which, on current trajectories, will almost certainly exacerbate the decline, eroding human creativity and productivity. That raises a critical question: how can we use AI to enhance our cognitive abilities rather than diminish them?

Did calculators make us worse at maths?

When dining out, my companions often turn to me at the end of the meal, expecting that my “PhD in Physics from Oxford” qualifies me to quickly split the bill. Ironically, and to their surprise, I’m usually the worst in the group at performing long division mentally. Instead, I promptly use my iPhone’s calculator app. In fact, most mathematicians and physicists I know struggle with basic arithmetic; our expertise lies primarily in dealing with complex differential equations, analysing prime number distributions or exploring similarly abstract concepts.

Yet I vividly recall, as a diligent child of Chinese immigrants, repeatedly working through countless Kumon maths exercises until I had memorised every arithmetic combination, only to forget them eventually. What endured from that rigorous drilling wasn’t the rote memorisation, but a deeper understanding of arithmetic’s underlying patterns and the relationships between numbers. In fact, if you ask most physicists or mathematicians to explain something as seemingly simple as long division, you’ll likely end up in an hours-long conversation about the intricacies of number theory or calculus.

One of the remarkable aspects of mathematics is its layered structure, building logically from one concept to the next. Counting progresses naturally into arithmetic, arithmetic into algebra, algebra into geometry, geometry into trigonometry, then calculus, enhanced by linear algebra, ultimately leading to the fundamental principles underpinning neural networks – the foundation of the very LLMs we use today. Indeed, I can trace a direct connection from my current understanding of LLMs back to counting on my fingers, even recalling specific pages from textbooks or particular math classes where each of these concepts first clicked for me.

The existence of tools such as calculators, Excel and computer-based mathematical simulations doesn’t diminish my mathematical knowledge or abilities. Instead, these tools enhance my capabilities, freeing me to explore new frontiers by eliminating the burden of repetitive calculations.

How can we use ChatGPT to make us smarter?

Now let’s consider LLMs, specifically the chatbot interfaces themselves, rather than the wide range of applications built upon them. Unlike maths, language is deeply intertwined with our notions of consciousness, making it hard to construct a clear, hierarchical “technology tree” like the one in mathematics. Consequently, it’s less clear precisely how to integrate LLMs effectively into writing, analysis or tasks like creating presentations. Should LLMs help structure initial thoughts, or are they better suited to refining content at the end? Or perhaps the steps in between?

Last week, I was jamming on the latest GPT-4.5 with Ajay Agrawal, the founder and CEO of Sirion (the company that acquired mine), and he said something that struck me as profoundly insightful: “ChatGPT is like Socrates: if you have a sharp mind, it is immensely powerful. If not, it is utter garbage.” In many ways, it’s just a tool, like a calculator. If you don’t understand the broader problem you’re trying to solve, punching in square roots or subtraction sequences is meaningless. But if you have a clear grasp of the underlying issue, a calculator, Excel or other computational tools can save you hours, or even millennia, of human effort.

What this means is that, whether or not we use LLMs, the human mind must remain sharp, logical and focused, just as it must when using a calculator. And just as my Kumon long-division drills helped build mathematical intuition, consistent, thoughtful writing without the aid of LLMs, together with genuine introspective thinking, from childhood through adulthood, is essential for training and strengthening the mind.

Don’t let ChatGPT make you lazy

This has significant implications for educators and for parents like me. As a society, we have a choice: we can take the lazy route and allow unchecked use of LLMs in our children’s education, almost certainly accelerating the decline of our collective cognitive abilities. Or we can adopt a more deliberate, structured approach, much like we did with calculators. Just as we wait until students fully grasp arithmetic before handing them a calculator, we should ensure children first learn to write, read, reason and debate without relying on AI.

My boys, aged six and seven, still do maths worksheets the old-fashioned way and write out their thoughts by hand. I expect them to keep doing so for years. Only once they’ve built a solid intellectual foundation should tools like LLMs enter the picture: not to think for them, but to amplify a mind already trained to think clearly on its own.

Standing before the Taj Mahal, a monument of astonishing architectural beauty, immense craftsmanship and mathematical precision, I was struck by what it represented: human intellect, love and physical creation fused into something transcendent. Its chief architect, Ahmad Ma’mar Lahori, worked without 3D modelling software, without AI, without modern machinery. The absence of such tools didn’t hold him back, because he had mastered the fundamentals. Had he lived today, I’m convinced he would have used AI not as a crutch, but as a multiplier, perhaps to build something even grander, even more magnificent. That’s the mindset we need: LLMs should expand human ingenuity, not replace it. We must not let laziness allow these tools to diminish what makes us human.
