The AI revolution is pushing human thinking into reverse, says Lewis Liu
Last weekend my wife and I were in Florence for an old family friend’s wedding. With a morning free, we visited the Uffizi, a solemn temple for a former professional artist and art major like me.
The exhibit is curated chronologically, opening with late Medieval works whose flat perspective reveals the “spell” of Church symbolism: a closed intellectual system in which knowledge was copied from one generation to the next but never created, bound by a singular interpretative framework dictated by the Church. Inquiry was punished.
Room by room, the exhibit traces the early Renaissance and its rediscovery of perspective and geometry. The breaking of this closed informational spell is largely attributed to the injection of outside influences: mathematics from the Islamic world, Chinese inventions such as paper and printing, and rediscovered ancient Greek and Roman texts. The result was a far more humanistic perspective than the singular doctrine of the Church.
Moving through the galleries, my wife, who studied abroad in Florence during her Stanford undergrad, pointed out that the colours were mostly sourced from ground local minerals, so the Tuscan landscape echoes the paintings themselves. That is something you simply cannot experience looking at photos on a laptop or in an art history textbook.
Walking into the High Renaissance – the Botticellis, the Raphaels – it felt as if a blindfold had been lifted from the Florentine people: vivid colours, perspectives, and subjects once thought blasphemous, almost daring the Church to limit human genius. Towards the end, I found the painting I’d been anticipating, especially as a deep student of modern art: Titian’s Venus of Urbino. To my shock, the room was completely empty except for my wife and me. I lingered there, fast-forwarding history: three centuries later, Manet would paint Olympia directly inspired by this Titian, launching Modernism and, eventually, contemporary art. The art world would forever expand its definition of what art could be.
Coming out of the Uffizi, however, I felt a sense of dread. Reflecting on my journey through Renaissance art, I realised: our current AI revolution is pushing human thinking into reverse.
Three fundamental AI developments concern me. Before naming them, let me acknowledge that these are all immensely powerful tools that we must learn to wield for positive impact. But I also want to sound an alarm: if not properly managed, we may find ourselves slipping into another intellectual Dark Age.
The first is Reinforcement Learning from Human Feedback (RLHF), a technique used by OpenAI, Anthropic, and Google to fine-tune their large language models. Human evaluators rate different model outputs; those preference ratings train a reward model, which is then used to fine-tune the LLM until its answers match human expectations. However, this approach aligns the model with the specific cultural values and biases of its evaluators, narrowing the intellectual output of the LLM. It also drives a well-documented failure mode known as LLM sycophancy, where the model agrees with or flatters the user regardless of factual correctness.
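For the technically curious, here is a minimal sketch of the preference-modelling step at the heart of RLHF, written in Python with PyTorch. The architecture, dimensions, and data are illustrative stand-ins rather than any lab’s actual setup; the point is that the reward model learns only what a particular pool of evaluators happens to prefer.

```python
# A minimal sketch of RLHF's preference-modelling step, assuming PyTorch.
# A small reward model learns to score a "chosen" response above a
# "rejected" one via the Bradley-Terry pairwise loss; in full RLHF, this
# reward model then steers the fine-tuning of the LLM itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # response embedding -> scalar reward

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in embeddings for a batch of eight evaluator-rated response pairs.
chosen, rejected = torch.randn(8, 64), torch.randn(8, 64)

# Bradley-Terry loss: push reward(chosen) above reward(rejected).
opt.zero_grad()
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
opt.step()
# The narrowing is built in: the model encodes only what *these*
# evaluators preferred, values, biases and all.
```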
The second is fully autonomous agents. The current generation of AI agents, trained on expansive but generic internet and GitHub knowledge, operates through large prompts, lacking contextual awareness or personalised knowledge. Famously, this July, Replit, a well-known AI coding tool, deleted a customer’s entire production database without prompting or human confirmation. Without deep context, current LLMs aren’t ready to be “let loose” in a fully automated fashion. In my recent conversations with central bankers, the notion of autonomous “agentic payments” – payments initiated purely by LLM-based AI agents with zero human approval – terrifies them. Their concern isn’t just another deleted database, but another financial and economic crisis.
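This is why I keep coming back to human confirmation. Below is a minimal sketch, in plain Python, of the kind of approval gate I mean. The action names and prompt are hypothetical, but the principle is that a destructive or financial action can never execute on an agent’s say-so alone.

```python
# A minimal sketch of a human-in-the-loop gate for agent actions.
# The IRREVERSIBLE set and action names are illustrative assumptions.
IRREVERSIBLE = {"delete_database", "drop_table", "send_payment"}

def execute(action: str, params: dict, run) -> str:
    """Run an agent-proposed action, pausing for a human on risky ones."""
    if action in IRREVERSIBLE:
        answer = input(f"Agent requests '{action}' with {params}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked: human approval withheld"
    return run(**params)

# Usage: an agent proposing a payment never bypasses the human prompt.
result = execute("send_payment", {"amount_eur": 5000, "to": "ACME Ltd"},
                 run=lambda amount_eur, to: f"paid {amount_eur} EUR to {to}")
```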
The third is knowledge collapse, something I’ve written about extensively. Simply put: AI overindexes on relatively common occurrences and underindexes on rarer ones. Over successive generations, without enough external input, rare knowledge disappears entirely. Yet it is often knowledge of the rare that is most helpful in our work and lives. Let me give a live example. I’ve been experimenting with a few engineers and scientists, exploring autonomous AI agents for amplifying individual human knowledge. One agent made a systematic mistake: every time it was asked a particular question, it gave the same wrong answer. Even after we corrected the mistake, the wrong answer had already appeared several times in the database, so no matter which new AI system we had interrogate that database, it reproduced the error. AI overindexes on what’s relatively common, even when it’s wrong.
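You can watch this dynamic in a toy simulation. The sketch below, assuming only numpy, re-estimates a distribution of “facts” from samples of its own previous generation. The numbers are illustrative, but the mechanism is general: once the rare fact misses a single sample, its estimated probability is exactly zero, and no later generation can ever produce it again.

```python
# A toy simulation of knowledge collapse, assuming numpy. Each
# "generation" is trained only on samples drawn from the previous one.
import numpy as np

rng = np.random.default_rng(0)
# Facts 0-3 are common; fact 4 is the rare but valuable one (1%).
probs = np.array([0.40, 0.30, 0.20, 0.09, 0.01])

for gen in range(8):
    sample = rng.choice(5, size=50, p=probs)   # generate from the current model
    counts = np.bincount(sample, minlength=5)
    probs = counts / counts.sum()              # "retrain" on its own output
    print(f"gen {gen}: rare-fact share = {probs[4]:.2f}")
# The rare fact's share drifts noisily, and the moment it misses one
# 50-draw sample its probability hits 0 permanently: the tail collapses.
```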
Reversing the Renaissance
Putting this together, you can see how, if we’re not careful, we’re heading toward the reverse of the Renaissance’s intellectual openness. RLHF aligns AI to a specific value doctrine determined by model developers, echoing the Church’s singular orthodoxy. Autonomous agents mindlessly copy instructions and knowledge regardless of context, like monks transcribing texts without understanding their meaning. And the intellectual system closes in on itself as knowledge narrows, through the mathematical way diversity of thought disappears across successive AI generations. Perspective and depth give way to flatness and darkness. Manet’s Modernist revolution becomes flat orthodox paintings for the Church.
Am I saying we should stop using AI? Far from it. The printing press, which democratised knowledge, was one of the drivers of the Renaissance. But the people of Florence didn’t use the printing press merely to copy one another; they used it to amplify their own individuality and thoughts.
This is what AI needs to do for humans in 2025. Machine-human interaction has been democratised through AI: we need it to amplify our individual voices, not make everything the same. I have three key suggestions:
First, human-in-the-loop for certain critical decisions must be reinforced, however inconvenient. Anything involving payments or wholesale database manipulation should require human approval, as in the approval gate sketched above.
Second, our AI agents need not only context (as I’ve written about before), but also what I call “opinionation”: AI systems personalised to individual humans. Opinionation means the AI takes a nuanced, individualised perspective because it’s trained and indexed on what’s specific to you: your work, your knowledge, your judgment – not just the generic internet or even a large corporate database (a minimal sketch of this idea follows these suggestions). My bet: a collection of opinionated, personalised AIs is better than a highly centralised AI system, because AI systems tend toward knowledge collapse without sufficient variation.
Third, a behavioural point for us humans: AI should amplify our thinking, not replace it. We cannot be lazy and let it think for us.
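To make the second suggestion concrete, here is a minimal sketch of opinionation as retrieval over a personal corpus. The scoring is a toy word-overlap measure and the boost factor is an assumption of mine, but it shows how an AI indexed on your own knowledge can surface your judgment ahead of the generic consensus.

```python
# A minimal sketch of "opinionation" as personalised retrieval: the
# user's own corpus is boosted over generic sources. Corpus contents
# and PERSONAL_BOOST are illustrative assumptions.
PERSONAL_BOOST = 2.0  # hypothetical weighting, tuned per user

corpus = [
    {"text": "Our fund treats crypto as uninvestable until custody matures", "personal": True},
    {"text": "Crypto is a fast-growing asset class with broad adoption", "personal": False},
]

def overlap(query: str, text: str) -> float:
    """Toy relevance score: fraction of query words found in the text."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def retrieve(query: str) -> str:
    """Return the best passage, weighting the user's own knowledge."""
    def score(doc):
        return overlap(query, doc["text"]) * (PERSONAL_BOOST if doc["personal"] else 1.0)
    return max(corpus, key=score)["text"]

print(retrieve("should we invest in crypto"))
# The personal view wins even when the generic one matches equally well.
```

Nothing here is sophisticated; the design choice is simply that your own corpus outranks the internet’s, which is exactly what keeps a collection of such AIs varied rather than collapsing into one voice.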
The question is twofold: Can AI technologists build systems that amplify the human voice? And can we, as daily consumers, just please use our brains? We will determine whether the AI revolution becomes like the printing press, ushering in the next great Renaissance of human thought, or like the monolithic Medieval Church, reversing us into a new Dark Age.
Dr Lewis Z. Liu is co-founder and CEO of Eigen Technologies